RFC: Outernet Wiki Drive

So I’ve let this brew in my head for a while now, and it’s time to do a proper braindump.

The main idea is to have a completely community-edited archive, represented on the Internet by wiki-like storage with a web-based interface.

The storage would consist of actual content and its metadata in a revision-tracked database.

The interface would let anyone with an account perform actions on the wiki drive. Possible actions include categorizing content, adding language tags, editing titles, modifying the content itself with updated files, etc.

None of the actions performed by a user takes effect immediately. This gives other users a chance to confirm the action (in effect, approve it) or to perform a different or opposite action. When actions by two or more users are in conflict, the interface automatically opens a discussion where the users can reach an agreement. Once agreement is reached, all users will be asked to confirm the action that has been agreed upon. If no agreement is reached within a day, the wiki drive automatically reverts the actions in question.
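A minimal sketch of this pending-action workflow in Python. The class and function names are invented for illustration, and the conflict rule (“same target, different change”) is my reading of the proposal, not a confirmed design:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

REVERT_AFTER = timedelta(days=1)  # conflicted actions auto-revert after a day

@dataclass
class Action:
    user: str
    target: str          # e.g. a content ID or metadata field
    change: str          # description of the proposed edit
    created: datetime
    confirmations: set = field(default_factory=set)

def in_conflict(a: Action, b: Action) -> bool:
    # Two pending actions conflict when they touch the same target
    # but propose different changes.
    return a.target == b.target and a.change != b.change

def resolve(pending: list, now: datetime) -> tuple:
    """Split pending actions into (still_open, reverted).

    Conflict-free actions stay open, awaiting curator approval; conflicted
    actions that have been unresolved for over a day are reverted.
    """
    still_open, reverted = [], []
    for action in pending:
        conflicted = any(in_conflict(action, other)
                         for other in pending if other is not action)
        expired = now - action.created > REVERT_AFTER
        if conflicted and expired:
            reverted.append(action)   # no agreement reached within a day
        else:
            still_open.append(action)
    return still_open, reverted
```

A real implementation would also need the discussion-opening step and the “all users confirm the agreed action” path, which this sketch leaves out.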

Curators (initially chosen by Outernet, later community-picked) oversee activity in the wiki drive and approve actions that are conflict-free. Curators will still have the ability to deny actions, even conflict-free ones, according to Outernet’s content guidelines (which will be published soon). Curators cannot perform any actions on the wiki drive themselves, though, so they act in a strictly oversight capacity.

Once a day, a cron job will sync all approved edits to the community library on Outernet.
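For example, a hypothetical crontab entry for that daily sync (the script path, name, and log location are invented for illustration):

```shell
# Run the approved-edits sync once a day at 03:00 UTC; append output to a log.
0 3 * * * /opt/wikidrive/bin/sync-approved-edits.sh >> /var/log/wikidrive-sync.log 2>&1
```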

To the extent allowed by security and privacy concerns, as well as common sense, the wiki drive will publish complete activity logs on a publicly accessible page and/or in a downloadable format.

While we are definitely going to work on the wiki drive, I would like to hear your comments first. Ideas on how to make this work today with existing tools would also be welcome.

I moved 9 posts to a new topic: Bandwidth issues when selecting content for Outernet

Given that many people in your core use cases will not be able to participate in the system, it might be worth having some kind of points system to encourage content for the use cases you want to support.

A vote from a (probably English-speaking, probably Western) internet user would be worth 1 point.

For the sake of argument:

  • About a ‘developing country’: +500 points
  • About a least developed country: +1,000 points
  • In French: +800 points (North Africa)
  • In Arabic: +800 points (Africa & Middle East)
  • In Mandarin: +800 points
  • In Spanish: +500 points (South America)
  • In additional other languages: +400 points per language
  • Written for children: +600 points
  • Written by or for women: +300 points
  • Potentially lifesaving: +500 points

You could then say that, to be broadcast, content must have a minimum of 2,000 points.

This would give people a good incentive to translate content into other languages, particularly those you are targeting.
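The proposed scoring boils down to simple addition, which can be sketched directly in Python. The point values and threshold come from the list above; the tag names and function signatures are made up for illustration:

```python
# Per-tag bonuses, taken from the point values proposed above.
BONUSES = {
    "developing_country": 500,
    "least_developed_country": 1000,
    "french": 800,
    "arabic": 800,
    "mandarin": 800,
    "spanish": 500,
    "written_for_children": 600,
    "by_or_for_women": 300,
    "potentially_lifesaving": 500,
}
EXTRA_LANGUAGE_BONUS = 400   # per additional language beyond those listed
BROADCAST_THRESHOLD = 2000   # minimum score for content to be broadcast

def score(votes: int, tags: list, extra_languages: int = 0) -> int:
    """One point per vote, plus the bonus for each applicable tag."""
    return (votes
            + sum(BONUSES[t] for t in tags)
            + EXTRA_LANGUAGE_BONUS * extra_languages)

def broadcastable(votes: int, tags: list, extra_languages: int = 0) -> bool:
    return score(votes, tags, extra_languages) >= BROADCAST_THRESHOLD
```

So an article with 10 votes tagged as French and written for children would score 10 + 800 + 600 = 1,410 points and fall short of the 2,000-point broadcast threshold.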


So I knew the guys who started Appropedia, and heard a lot about the headaches early on in the battles with the trolls. One thing I’d caution is that just a few jerks can spoil a whole mess of hard work by a host of committed volunteers. I’d caution against any sort of automated publishing system without some human moderation. This will be especially critical as the size of the archive grows near the limits of what can easily be downloaded in the available satellite passes. Curation is key.


I have been having a look around for tools that might help with this.

I think this is worth looking at https://www.penflip.com/sam_uk/outernet

More on Penflip: http://www.madebyloren.com/github-for-writers

The clever bit is that it provides git access, so you could point Transifex at your master git repository and have all your changes automatically synced into translation software. Transifex is free to use for open-source projects.


Yeah, I’m aware there’s git, but revision tracking is just one piece. When we say “content”, it’s really an arbitrary directory structure with a few requirements:

  • it has to be identified by unique ID
  • it needs to be inside a folder that is named after the ID
  • there needs to be an index.html that serves as entry point
  • it needs to have metadata

If you unpack content from archive.outernet.is, you’ll see that it contains an info.json file, which shows what that metadata looks like.
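The requirements above can be sketched as a small validation check in Python. The function name and error messages are invented for illustration; the expected layout (a folder named after the unique ID, containing index.html and info.json) follows the list in this post:

```python
import json
from pathlib import Path

def validate_content(root: Path, content_id: str) -> list:
    """Check one content directory against the listed requirements.

    Returns a list of problems; an empty list means the layout is valid.
    """
    problems = []
    folder = root / content_id  # the folder must be named after the ID
    if not folder.is_dir():
        problems.append(f"missing folder named after ID: {content_id}")
        return problems
    if not (folder / "index.html").is_file():
        problems.append("missing index.html entry point")
    meta = folder / "info.json"
    if not meta.is_file():
        problems.append("missing info.json metadata")
    else:
        try:
            json.loads(meta.read_text())
        except ValueError:
            problems.append("info.json is not valid JSON")
    return problems
```

A wiki drive could run a check like this before accepting an uploaded content archive, so malformed submissions are rejected before they ever reach curators.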