Mirror of https://github.com/almet/notmyidea.git (synced 2025-04-28 11:32:39 +02:00)

Commit 62f4cde0bb (parent 4b24651943): uMap first week, datasette, weeknotes

10 changed files with 581 additions and 4 deletions

content/code/2023-11-11-datasette-for-tracking.md (new file, 139 lines)

---
title: Using Datasette for tracking my professional activity
tags: Datasette, Graphs, SQL
---

I've been following Simon Willison for quite some time, but I had never actually played with his main project, [Datasette](https://datasette.io), before.

As I'm going back into development, I'm trying to track where my time goes, to be able to find patterns and simply remember how much time I've spent on each project. A discussion with [Thomas](https://thom4.net/) made me realize it would be nice to track all this in a spreadsheet of some sort, which is what I was doing until today.

Spreadsheets are nice, but they don't play well with rich content, and doing graphs with them is kind of tricky. So I went ahead and set everything up in Datasette.

First of all, I imported my `.csv` file into a SQLite database:

```bash
sqlite3 -csv -header db.sqlite ".import journal.csv journal"
```

Then I used [sqlite-utils](https://sqlite-utils.datasette.io/en/stable/) to do some tidying and rename the columns:

```bash
# Rename a column
sqlite-utils transform db.sqlite journal --rename "quoi ?" content

# Make everything look similar
sqlite-utils convert db.sqlite journal project 'value.replace("Umap", "uMap")'
```

Here is my database schema:

```bash
sqlite-utils schema db.sqlite
CREATE TABLE "journal" (
   [date] TEXT,
   [project] TEXT,
   [duration] TEXT,
   [where] TEXT,
   [content] TEXT,
   [paid_work] INTEGER
);
```
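
The same import can also be scripted with Python's standard-library `sqlite3` module. This is a sketch, not what the post actually uses; the sample rows and values are made up, and the column names are taken from the schema above:

```python
import csv
import sqlite3

# Write a tiny sample journal.csv (made-up demo data; the real file
# has the same columns after the renames above).
with open("journal.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["date", "project", "duration", "where", "content", "paid_work"])
    writer.writerow(["2023-11-06", "uMap", "6.5", "home", "read the codebase", "1"])
    writer.writerow(["2023-11-07", "Argos", "2", "home", "code review", "0"])

db = sqlite3.connect("db.sqlite")
db.execute(
    """CREATE TABLE IF NOT EXISTS "journal" (
       [date] TEXT, [project] TEXT, [duration] TEXT,
       [where] TEXT, [content] TEXT, [paid_work] INTEGER)"""
)

# Import the CSV, mirroring `sqlite3 -csv -header db.sqlite ".import journal.csv journal"`.
with open("journal.csv", newline="") as f:
    rows = [tuple(r) for r in csv.reader(f)][1:]  # skip the header row
db.executemany("INSERT INTO journal VALUES (?, ?, ?, ?, ?, ?)", rows)
db.commit()

print(db.execute("SELECT COUNT(*) FROM journal").fetchone()[0])
```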

I then installed Datasette along with a few plugins:

```bash
pipx install datasette
datasette install datasette-render-markdown datasette-write-ui datasette-dashboards datasette-dateutil
```

I then came up with a few SQL queries that I find useful.

How much I've worked per project:

```bash
sqlite-utils db.sqlite "SELECT project, SUM(CAST(duration AS REAL)) as total_duration FROM journal GROUP BY project;"
[{"project": "Argos", "total_duration": XX},
 {"project": "IDLV", "total_duration": XX},
 {"project": "Notmyidea", "total_duration": XX},
 {"project": "Sam", "total_duration": XX},
 {"project": "uMap", "total_duration": XX}]
```
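
The same aggregation can be checked from Python with the stdlib `sqlite3` module. A sketch with made-up rows (the real numbers are redacted in the post):

```python
import sqlite3

# Demo rows; durations are stored as TEXT, as in the schema above.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE journal ([project] TEXT, [duration] TEXT)")
db.executemany(
    "INSERT INTO journal VALUES (?, ?)",
    [("uMap", "6.5"), ("uMap", "3"), ("Argos", "2")],
)

# The CAST to REAL is what makes SUM() work on the TEXT column.
totals = db.execute(
    "SELECT project, SUM(CAST(duration AS REAL)) AS total_duration "
    "FROM journal GROUP BY project"
).fetchall()
print(dict(totals))
```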

How much I've worked per week, in total (I've redacted the results for privacy):

```bash
sqlite-utils db.sqlite "SELECT strftime('%Y-W%W', date) AS week, SUM(CAST(duration AS REAL)) AS hours FROM journal GROUP BY week ORDER BY week;"

[{"week": "2023-W21", "hours": XX},
 {"week": "2023-W22", "hours": XX},
 {"week": "2023-W23", "hours": XX},
 {"week": "2023-W25", "hours": XX},
 {"week": "2023-W29", "hours": XX},
 {"week": "2023-W37", "hours": XX},
 {"week": "2023-W39", "hours": XX},
 {"week": "2023-W40", "hours": XX},
 {"week": "2023-W41", "hours": XX},
 {"week": "2023-W42", "hours": XX},
 {"week": "2023-W44", "hours": XX},
 {"week": "2023-W45", "hours": XX}]
```
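
The week bucketing relies on SQLite's `strftime('%Y-W%W', date)`, which groups ISO dates by year plus Monday-based week number. A minimal sketch with made-up rows:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE journal ([date] TEXT, [duration] TEXT)")
# Demo data: two entries in the same week, one in the week before.
db.executemany(
    "INSERT INTO journal VALUES (?, ?)",
    [("2023-11-06", "6.5"), ("2023-11-10", "2"), ("2023-10-30", "4")],
)

# strftime('%Y-W%W') turns '2023-11-06' into '2023-W45', so GROUP BY week
# sums all entries that fall in the same Monday-based week.
rows = db.execute(
    "SELECT strftime('%Y-W%W', date) AS week, SUM(CAST(duration AS REAL)) AS hours "
    "FROM journal GROUP BY week ORDER BY week"
).fetchall()
print(rows)
```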

I then created a quick dashboard using [datasette-dashboards](https://github.com/rclement/datasette-dashboards), which looks like this:

![Hours per project](/images/datasette/hours-per-project.png)
![Hours per week](/images/datasette/hours-per-week.png)

Using this configuration:

```yaml
plugins:
  datasette-render-markdown:
    columns:
      - "content"
  datasette-dashboards:
    my-dashboard:
      title: Notmyidea
      filters:
        project:
          name: Projet
          type: select
          db: db
          query: SELECT DISTINCT project FROM journal WHERE project IS NOT NULL ORDER BY project ASC
      layout:
        - [hours-per-project]
        - [entries]
        - [hours-per-week]
      charts:
        hours-per-project:
          title: Nombre d'heures par projet
          query: SELECT project, SUM(CAST(duration AS REAL)) as total FROM journal GROUP BY project;
          db: db
          library: vega-lite
          display:
            mark: { type: arc, tooltip: true }
            encoding:
              color: { field: project, type: nominal }
              theta: { field: total, type: quantitative }
        hours-per-week:
          title: Heures par semaine
          query: SELECT strftime('%Y-W%W', date) AS week, SUM(CAST(duration AS REAL)) AS hours FROM journal GROUP BY week ORDER BY week;
          db: db
          library: vega-lite
          display:
            mark: { type: bar, tooltip: true }
            encoding:
              x: { field: week, type: ordinal }
              y: { field: hours, type: quantitative }
        entries:
          title: Journal
          db: db
          query: SELECT * FROM journal WHERE TRUE [[ AND project = :project ]] ORDER BY date DESC
          library: table
          display:
```

And ran Datasette with:

```bash
datasette db.sqlite --root --metadata metadata.yaml
```

content/code/2023-11-11-umap1.md (new file, 369 lines)

---
title: Adding Real-Time Collaboration to uMap, first week
headline: A heads-up on what I've been doing this week on uMap
tags: Python, CRDT, Sync
---

Last week, I was lucky enough to start working on [uMap](https://github.com/umap-project/umap/), an open-source map-making tool to create and share customizable maps, based on OpenStreetMap data.

My goal is to add real-time collaboration to uMap, but **we first want to be sure we understand the issue correctly**. There are multiple ways to solve this, so one part of the journey is understanding the problem properly (then we'll be able to choose the right path forward).

Part of the work is documenting it, so expect to see some blog posts around this in the future.

## Installation

I started by installing uMap on my machine, getting it to work, and reading the codebase. uMap is written in Python and Django, with old-school JavaScript on the front end, using the Leaflet library for the GIS-related interface.

Installing uMap was simple. On a Mac:

1. Create the venv and activate it:

   ```bash
   python3 -m venv venv
   source venv/bin/activate
   pip install -e .
   ```

2. Install the dependencies with `brew install postgis` (this will take some time to complete), then create the database:

   ```bash
   createuser umap
   createdb umap -O umap
   psql umap -c "CREATE EXTENSION postgis"
   ```

3. Copy the default config and run the server:

   ```bash
   # Copy the default config to umap.conf
   cp umap/settings/local.py.sample umap.conf
   export UMAP_SETTINGS=~/dev/umap/umap.conf
   make install
   make installjs
   make vendors
   umap migrate
   umap runserver
   ```

And you're done!

---

On Arch Linux I had to make a few changes, but all in all it was simple:

```bash
createuser umap -U postgres
createdb umap -O umap -U postgres
psql umap -c "CREATE EXTENSION postgis" -U postgres
```

Depending on your installation, you might need to change the USER that connects to the database. The configuration could look like this:

```python
DATABASES = {
    "default": {
        "ENGINE": "django.contrib.gis.db.backends.postgis",
        "NAME": "umap",
        "USER": "postgres",
    }
}
```

## How it's currently working

With everything running on my machine, I took some time to read and understand the current code base. Here are my findings:

- uMap currently uses a classical client/server architecture where:
  - The server is mainly here to handle access rights, store the data, and send it over to the clients.
  - The actual rendering and modification of the map happen directly in JavaScript, on the clients.

The data is split into multiple layers. At the time of writing, concurrent writes to the same layer are not possible, as one edit would potentially overwrite the other. Concurrent edits on different layers are possible, though.

When a change occurs, [each `DataLayer` is sent by the client to the server](https://github.com/umap-project/umap/blob/c16a01778b4686a562d97fde1cfd3433777d7590/umap/views.py#L917-L948).

- The data is updated on the server.
- **If the data has been modified by another client**, an `HTTP 422 (Unprocessable Entity)` status is returned, which makes it possible to detect conflicts. The users are prompted about it and asked if they want to overwrite the changes.
- The files are stored as geojson files on the server, named `{datalayer.pk}_{timestamp}.geojson`. [A history of the last changes is preserved](https://github.com/umap-project/umap/blob/c16a01778b4686a562d97fde1cfd3433777d7590/umap/models.py#L426-L433) (the default settings preserve the last 10 revisions).
- The data is stored [in a Leaflet object](https://github.com/umap-project/umap/blob/c16a01778b4686a562d97fde1cfd3433777d7590/umap/static/umap/js/umap.js#L158-L163) and [backups are made manually](https://github.com/umap-project/umap/blob/c16a01778b4686a562d97fde1cfd3433777d7590/umap/static/umap/js/umap.js#L1095:L1095) (changes do not seem to be saved automatically).
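
The conflict detection boils down to optimistic concurrency: the client sends back the version it last fetched, and the server refuses the write when someone else saved in between. A minimal sketch, not uMap's actual code (function and field names are made up for illustration):

```python
# Illustrative sketch of optimistic-concurrency conflict detection.
# (Not uMap's implementation; names are invented.)

def save_datalayer(stored: dict, payload: dict, client_reference: str):
    if client_reference != stored["version"]:
        # Conflict: someone saved since the client last fetched.
        return 422, stored
    stored = {"version": payload["version"], "data": payload["data"]}
    return 200, stored

server_layer = {"version": "t1", "data": {"features": []}}

# Client A saves against the version it fetched: accepted.
status, server_layer = save_datalayer(
    server_layer, {"version": "t2", "data": {"features": ["marker-1"]}}, "t1"
)
print(status)  # 200

# Client B still references t1, which was overwritten meanwhile: rejected,
# and the client can ask the user whether to overwrite.
status, _ = save_datalayer(
    server_layer, {"version": "t3", "data": {"features": ["marker-2"]}}, "t1"
)
print(status)  # 422
```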

### Data

Each layer consists of:

- On one side, the properties (matching the `_umap_options` key); on the other, the geojson data (the `features` key).
- Each feature is composed of three keys:
  - **geometry**: the actual geo object
  - **properties**: the data associated with it
  - **style**: styling information that goes with it, if any.
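
As a concrete (simplified, made-up) example, a single feature in that shape could look like this:

```python
# A simplified sketch of one feature, following the three keys described above.
# The values are invented for illustration.
feature = {
    "geometry": {"type": "Point", "coordinates": [-1.6778, 48.1173]},
    "properties": {"name": "Rennes", "description": "An example marker"},
    "style": {"color": "DarkBlue"},
}

assert set(feature) == {"geometry", "properties", "style"}
```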

![uMap options](/images/umap/umap-options.png)
![uMap features](/images/umap/umap-features.png)

## Real-time collaboration: the different approaches

Behind the name "real-time collaboration", we have:

1. **Streaming the changes to the clients**: when you're working with other people on the same map, you can see their edits the moment they happen.
2. The ability to handle **concurrent changes**: some changes can happen on the same data concurrently. In such a case, we need to merge them together and resolve any conflicts.
3. **Offline editing**: in some cases, one needs to map data but doesn't have access to a network. Changes happen on a local device and are then synced with other devices and the server.

*Keep in mind these notes are just food for thought, and other approaches might be discovered along the way.*

I've tried to come up with the different approaches I could follow to add the collaboration features we want.

- **JSON Patch and JSON Merge Patch**: two IETF specifications which define formats for generating and applying diffs on JSON documents. In this scenario, we could send the diffs from the clients to the server, and let it merge everything.
- **Using CRDTs**: Conflict-free Replicated Data Types are one of the other options we have lying around. The technology has mainly been used to solve concurrent editing of text documents (like [etherpad-lite](https://github.com/ether/etherpad-lite)), but should work fine on trees.

### JSON Patch and JSON Merge Patch

I stumbled on two IETF specifications, [JSON Patch](https://datatracker.ietf.org/doc/html/rfc6902) and [JSON Merge Patch](https://datatracker.ietf.org/doc/html/rfc7396), which define how JSON diffs can be expressed and applied.

There are multiple libraries implementing them, including at least one each for [Python](https://github.com/OpenDataServices/json-merge-patch), [Rust](https://docs.rs/json-patch/latest/json_patch/) and [JS](https://www.npmjs.com/package/json-merge-patch).

It's even [supported by the Redis database](https://redis.io/commands/json.merge/), which might come in handy in case we want to stream the changes with it.

If you're making edits to the map without changing all the data all the time, it's possible to generate diffs. For instance, let's take this simplified data (it's not valid geojson, but it should be enough for testing):

source.json
```json
{
  "features": [
    {
      "key": "value"
    }
  ],
  "not_changed": "whatever"
}
```

And now let's add a new object right after the first one:

destination.json
```json
{
  "features": [
    {
      "key": "value"
    },
    {
      "key": "another-value"
    }
  ],
  "not_changed": "whatever"
}
```

If we generate a diff:

```bash
pipx install json-merge-patch
json-merge-patch create-patch source.json destination.json
{
  "features": [
    {
      "key": "value"
    },
    {
      "key": "another-value"
    }
  ]
}
```

Multiple things to note here:

1. It's a valid JSON object.
2. It doesn't reproduce the `not_changed` key.
3. But… I was expecting only the new item to show up. Instead, we get both items, because the patch replaces the "features" key with everything inside it.

This is actually what [the specification defines](https://datatracker.ietf.org/doc/html/rfc6902#section-4.1):

> 4.1. add
>
> The "add" operation performs one of the following functions,
> depending upon what the target location references:
>
> o If the target location specifies an array index, a new value is
> inserted into the array at the specified index.
>
> o If the target location specifies an object member that does not
> already exist, a new member is added to the object
>
> o **If the target location specifies an object member that does exist,
> that member's value is replaced.**

This is bad news for us, as it will happen every time a new feature is added to the feature collection.
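
The replace-instead-of-merge behavior is easy to reproduce: the JSON Merge Patch algorithm (RFC 7396) fits in a few lines, and it only ever merges objects; arrays are replaced wholesale. A sketch (not the library's actual code):

```python
def merge_patch(target, patch):
    """Apply a JSON Merge Patch (RFC 7396): objects merge recursively,
    everything else (including arrays) is replaced wholesale."""
    if not isinstance(patch, dict):
        return patch
    if not isinstance(target, dict):
        target = {}
    result = dict(target)
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # a null value deletes the key
        else:
            result[key] = merge_patch(result.get(key), value)
    return result

source = {"features": [{"key": "value"}], "not_changed": "whatever"}
patch = {"features": [{"key": "value"}, {"key": "another-value"}]}

# The patch has to carry BOTH features: the whole array is replaced.
merged = merge_patch(source, patch)
print(merged["features"])
print(merged["not_changed"])
```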

It doesn't work out of the box, but we could probably hack something together by giving every feature a unique id, and sending that to the server. We wouldn't be using vanilla `geojson` files anymore, though, but adding some complexity on top.
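
The unique-id hack could look like this: if features live in an object keyed by id instead of an array, an RFC 7396-style merge only has to carry the new entry. A sketch; names and data shape are made up for illustration:

```python
import uuid

def merge_patch(target, patch):
    # Minimal RFC 7396-style merge: dicts merge recursively, the rest is
    # replaced (null-deletion omitted for brevity).
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        result[key] = merge_patch(result.get(key), value)
    return result

# Features keyed by a unique id instead of stored in an array.
layer = {"features": {"a1": {"key": "value"}}}

# The patch only needs to mention the new feature...
new_id = str(uuid.uuid4())
patch = {"features": {new_id: {"key": "another-value"}}}

merged = merge_patch(layer, patch)
# ...and the existing feature "a1" survives the merge.
print(sorted(merged["features"]) == sorted(["a1", new_id]))
```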

At this point, I left this aside and went to experiment with the other ideas. After all, the goal here is not (yet) to have something functional, but to clarify how the different options would play out.

### Using CRDTs

I had a look at the two main CRDT implementations that seem to get traction these days: [Automerge](https://automerge.org/) and [Yjs](https://github.com/yjs/yjs).

I first tried to make Automerge work with Python, but the [Automerge-py](https://github.com/automerge/automerge-py) repository is outdated and won't build. I realized at this point that we might not even need a Python implementation:

In this scenario, the server could just stream the changes from one client to the others, and the CRDT will guarantee that the structures converge on all clients. It's handy because it means we won't have to implement the CRDT logic on the server side.

Let's do some JavaScript, then. A simple Leaflet map would look like this:

```typescript
import L from 'leaflet';
import 'leaflet/dist/leaflet.css';

// Initialize the map and set its view to our chosen geographical coordinates and a zoom level:
const map = L.map('map').setView([48.1173, -1.6778], 13);

// Add a tile layer to our map, in this case using Open Street Map
L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
  maxZoom: 19,
  attribution: '© OpenStreetMap contributors'
}).addTo(map);

// Initialize a GeoJSON layer and add it to the map
const geojsonFeature = {
  "type": "Feature",
  "properties": {
    "name": "Initial Feature",
    "popupContent": "This is where the journey begins!"
  },
  "geometry": {
    "type": "Point",
    "coordinates": [-0.09, 51.505]
  }
};

const geojsonLayer = L.geoJSON(geojsonFeature, {
  onEachFeature: function (feature, layer) {
    if (feature.properties && feature.properties.popupContent) {
      layer.bindPopup(feature.properties.popupContent);
    }
  }
}).addTo(map);

// Add new features to the map with a click
function onMapClick(e) {
  const newFeature = {
    "type": "Feature",
    "properties": {
      "name": "New Feature",
      "popupContent": "You clicked the map at " + e.latlng.toString()
    },
    "geometry": {
      "type": "Point",
      "coordinates": [e.latlng.lng, e.latlng.lat]
    }
  };

  // Add the new feature to the geojson layer
  geojsonLayer.addData(newFeature);
}

map.on('click', onMapClick);
```

Nothing fancy here, just a map which adds markers when you click. Now let's add Automerge.

We add a bunch of imports; the goal here will be to sync between tabs of the same browser. Automerge [announced an automerge-repo](https://automerge.org/blog/2023/11/06/automerge-repo/) library to help with all the wiring-up, so let's try it out!

```typescript
import { DocHandle, isValidAutomergeUrl, Repo } from '@automerge/automerge-repo'
import { BroadcastChannelNetworkAdapter } from '@automerge/automerge-repo-network-broadcastchannel'
import { IndexedDBStorageAdapter } from "@automerge/automerge-repo-storage-indexeddb"
import { v4 as uuidv4 } from 'uuid';
```

These were just imports, don't worry about them too much. The next section does the following:

- Instantiate an "automerge repo", which helps send the right messages to the other peers when needed;
- Add a mechanism to create and initialize a document if needed,
- or otherwise look for an existing one, based on a hash passed in the URI.

```typescript
// Add an automerge repository, syncing between tabs over a BroadcastChannel
// and persisting to IndexedDB.
const repo = new Repo({
  network: [new BroadcastChannelNetworkAdapter()],
  storage: new IndexedDBStorageAdapter(),
});

// Automerge-repo exposes a handle, which is mainly a wrapper around the library internals.
let handle: DocHandle<unknown>

const rootDocUrl = `${document.location.hash.substring(1)}`
if (isValidAutomergeUrl(rootDocUrl)) {
  handle = repo.find(rootDocUrl);
  let doc = await handle.doc();

  // Once we've found the data in the browser, let's add the features to the geojson layer.
  Object.values(doc.features).forEach(feature => {
    geojsonLayer.addData(feature);
  });

} else {
  handle = repo.create()
  await handle.doc();
  handle.change(doc => doc.features = {});
}
```

Let's change the `onMapClick` function:

```ts
function onMapClick(e) {
  const uuid = uuidv4();
  // ... what was there previously
  newFeature["properties"]["id"] = uuid;

  // Add the new feature to the geojson layer.
  // Here we use the handle to do the change.
  handle.change(doc => { doc.features[uuid] = newFeature });
}
```

And on the other side of the logic, let's listen to the changes:

```ts
handle.on("change", ({doc, patches}) => {
  // "patches" is a list of all the changes that happened to the tree.
  // Because we're sending JS objects, a lot of patch events are being sent.
  //
  // Filter to only keep first-level events (we currently don't want to reflect
  // changes down the tree — yet)
  console.log("patches", patches);
  let inserted = patches.filter(({path, action}) => {
    return (path[0] == "features" && path.length == 2 && action == "put")
  });

  inserted.forEach(({path}) => {
    let uuid = path[1];
    let newFeature = doc.features[uuid];
    console.log(`Adding a new feature at position ${uuid}`)
    geojsonLayer.addData(newFeature);
  });
});
```

And… it's working! Here is a little video capture of two tabs working together :-)

<video controls preload="none" width="100%"
  poster="https://nuage.b.delire.party/s/kpP9ijfqabmKxnr">
  <source src="https://nuage.b.delire.party/s/kpP9ijfqabmKxnr/download"
  type="video/mp4">
</video>

It's very rough, but the point was mainly to see how the library can be used and what the API looks like. I found that:

- The `patches` object that's sent to the `handle.on` subscribers is very chatty: it contains all the changes, and I have to filter it to get what I want.
- I was expecting the objects to be sent in one go, but an operation is created for each change. For instance, setting a new object on a key will result in multiple events, as it will first create the object, and then populate it.
- Here I need to keep track of all the edits, but I'm not sure how that will work out with, for instance, the offline use case (or with limited connectivity). That's what I'm going to find out next week, I guess :-)
- The team behind Automerge is very welcoming, and was prompt to answer me when needed.
- There seems to be another API, `Automerge.getHistory()` and `Automerge.diff()`, to get a patch between different docs, which might prove more helpful than getting all the small patches.

We'll figure that out next week, I guess!

content/images/datasette/hours-per-project.png (new binary file, 195 KiB)
content/images/datasette/hours-per-week.png (new binary file, 74 KiB)
content/images/umap/umap-features.png (new binary file, 222 KiB)
content/images/umap/umap-options.png (new binary file, 171 KiB)

@ -6,7 +6,7 @@ slug: index

👋 **Welcome!** I'm Alexis, a developer interested in collective dynamics, digital freedoms, and facilitation.

- You'll find on this site my weekly notes, a few blog posts, reading notes, and bits of code I want to keep somewhere. Happy reading!
+ You'll find on this site [my weekly notes](/weeknotes), a few [blog posts](/journal), [reading notes](/lectures), and [bits of code](/code) I want to keep somewhere. Happy reading!

To contact me, send me an email at ``alexis@`` this domain (dropping the `blog.`).

@ -22,7 +22,7 @@ the [Linux kernel](https://www.kernel.org/pelican.html) and

: A website to manage group expenses, [created in late
2011](https://blog.notmyidea.org/how-are-you-handling-your-shared-expenses.html).
You can enter who paid what, and for whom, and a balance is
- kept for you.
+ kept for you. I maintain an open instance at [ihatemoney.org](https://ihatemoney.org).

[Kinto](https://github.com/kinto/kinto)
: A generic backend for web applications. I started this project with

content/weeknotes/2023-43.md (new file, 69 lines)

---
date: 2023-11-11
headline: Back after a two-week break. First week on uMap, and a lot of side activities.
---

# 2023, Week 45

Back after a two-week break. First week on uMap, and a lot of side activities.

## What happened

**[uMap](https://umap.openstreetmap.fr/)**
: I started working on the project, which I'm joining for a while, alongside David, Yohan, Aurélie and Sophie. The idea is to work specifically on real-time collaboration on maps. I started by reading the existing code, then by surveying the state of the art of existing solutions. Taking the time to read the code, compare the different approaches, and measure their impact.
: David and Yohan suggested I document my work, so I wrote [a first blog post](/adding-real-time-collaboration-to-umap-first-week.html) (in English) on the subject.

**[Argos](https://framasoft.frama.io/framaspace/argos/)**
: I was able to discuss the code I wrote with Matthieu. His eyes are more practiced than mine, and I got some interesting feedback. Among other things, I learned about [a similar project](https://github.com/mozilla-services/telescope) at Mozilla, which covers some of the same use cases. Some interesting leads over there. I haven't taken the time to integrate the changes yet, but I really like the idea of doing code reviews like this.

**[Chariotte](https://chariotte.fr/)**
: I deployed Chariotte on AlwaysData's infrastructure, which offers us [free hosting as an open-source project](https://www.alwaysdata.com/fr/open-source/). Everything went smoothly and without a hitch.

**[Notmyidea](https://notmyidea.org/)**
: I finally created the structure to carry my salaried activity. For now I'm going with a sole proprietorship ("auto-entreprise"). The choice comes after much back and forth. I'm glad to be listening to my need to limit my collective involvement for now.
: I took the time to update the [Projects](/projets.html) page to make it more readable.
: I used Datasette to track the hours I spend on each project, and wrote [a blog post on the subject](/using-datasette-for-tracking-my-professional-activity.html).

## Joys 🤗

- Feeling my desire to get back to technical work after two weeks doing other things.
- Taking the time to gauge the existing dynamics of a group before making proposals.
- Managing not to let myself be overwhelmed by emotions that aren't mine.
- Gathering friends for a writing workshop. Being happy to hear other people's texts.
- Taking an afternoon to simply talk with each other.
- Feeling joy while listening to music. It's simple, but it works so well.
- Enjoying a technical brainstorming session.
- Running into people from the brewing world again without stressing.
- Doing code review with Matthieu, and using it as a chance to catch up.
- Having time alone without schedule constraints and feeling productive during it.
- Finding myself in a car in pouring rain, surrounded by friends, listening to Clara Ysé at full volume.
- Taking part in a skill-sharing session.

## Sorrows 😬

- I felt tired after spending an afternoon in meetings. Sad not to have anticipated it when it had been handed to me on a platter.
- My rhythm is improving, but this week was too full.
- Perhaps as a consequence: I'm tired and coming down with something.
- I sometimes felt anxious and sad when waking up.
- I cancelled a weekend I was looking forward to, to avoid infecting everyone. Both frustrated and glad to be taking care (of myself and of others).
- I felt disorganized and "floating" during my last week off.
- Feeling judged in a conversation and not taking the necessary distance from the people involved.

## Seen, read, listened to

- 🎬 Watched [The Old Oak](https://fr.wikipedia.org/wiki/The_Old_Oak), by Ken Loach. I really like the subtlety of its message; I burst into tears at the end of the film. It does as much good as it raises questions about my commitments.
- 🎬 Watched [Victoria](https://fr.wikipedia.org/wiki/Victoria_(film,_2016)) by Justine Triet.
- 🎵 Saw [Juliette](https://fr.wikipedia.org/wiki/Juliette_(chanteuse)) in concert. I liked some of the lyrics; it makes me want to listen to them more closely.
- 🎵 Saw [French79](https://fr.wikipedia.org/wiki/French_79) in concert; positive vibes feel good!
- 📖 Started reading Spin, by Robert Charles Wilson.
- 🕸️ An Acrimed article on [the coverage of the Israeli-Palestinian conflict in the French media](https://www.acrimed.org/Conflit-israelo-palestinien-calomnies-mediatiques).
- 🎵 Discovered [Clara Ysé](https://www.youtube.com/watch?v=YvQAYUb9mu8). What power!
- 🎧 Listened to [Burn-out militant, mal ou symptome ?](https://www.radiofrance.fr/franceculture/podcasts/sous-les-radars/burn-out-militant-mal-ou-symptome-2399524) on the show « Sous les radars ».
- 🎧 Listened to [Logement : la fin de la France des propriétaires ?](https://www.radiofrance.fr/franceculture/podcasts/sous-les-radars/logement-la-fin-de-la-france-des-proprietaires-3511615)

## Technical

- I read [a thread about Overture Maps on the OSM forum](https://community.openstreetmap.org/t/overturemaps-org-big-businesses-osmf-alternative/6760/7), quite enlightening on what Overture Maps is, and the potential threat it poses to OSM.
- ▶︎ Simon Willison gave [a talk on embeddings](https://www.youtube.com/watch?v=ArnMdc-ICCM&t), how they work and how to use them.
- ▶︎ A talk on [the internals and state of the art of CRDTs](https://www.youtube.com/watch?v=x7drE24geUw) by Martin Kleppmann.
- 🕸️ I discovered [Freesound](https://freesound.org/) thanks to Sam: a library of royalty-free sounds.

@ -43,11 +43,11 @@ MENU = [

CATEGORIES_DESCRIPTION = {
    "weeknotes": (
        "Notes hebdo",
-        "Chaque semaine, je fais un petit résumé de ce qui s'est passé. Cela m'aide à garder le fil de mes idées et de mes différents projets. Un bon moyen de faire un pause et d'observer la semaine sous un autre angle.",
+        "Chaque semaine, je fais un petit résumé de ce qui s'est passé. Cela m'aide à garder le fil de mes idées et de mes différents projets. Un bon moyen de faire une pause et d'observer la semaine sous un autre angle.",
    ),
    "lectures": (
        "Notes de lecture",
-        "Quelques notes prises au détour d'une lecture, plutôt pour ne pas les oublier, et me remémorer le livre quand j'en ai besoin.",
+        "Quelques notes prises au détour d'une lecture, plutôt pour ne pas les oublier et me remémorer le livre quand j'en ai besoin.",
    ),
    "code": (
        "Code, etc.",