Hello from 2016

It’s been more than a year since I’ve blogged. Sigh.

It’s been a good year. I learned Ruby, and I’ve become fluent enough that it’s really fun. I even wrote my first Twitter bot.

github streak

I started running, and have done enough of it that it isn’t pure torture.

It looks like we’re staying in Seattle for good — we bought a house, and there’s a guest room for folks passing through — and I love it here. Work is fun, the kids are fun (if exasperating), and more and more family is nearby. The Wikipedia and free culture communities around here are great.

Lots more I’d like to write about — stories to tell, pictures to post — but Brighton keeps asking me to play, and I can’t put him off any longer. I think my modest goal will be for this not to be the only post in 2016.

Screencasting on Debian: Kazam is good!

I’ve periodically done screencasting and screen recording over the last few years — mostly while running Ubuntu or Debian — and it’s been an evolving pain to find a piece of GNU/Linux screen recording software that actually works. The one I’ve had the most success with is gtk-RecordMyDesktop, but it’s confusing to configure, and can be quite picky with audio sources… sometimes making it impossible to capture audio at all. There are other alternatives — byzanz, istanbul — that tend to be just as buggy or worse.

My current use case is slightly complicated: I’m doing Google Hangouts sessions with people using the web app I’ve been working on, and I want to record the video of them using it, their audio, and my audio. Basically, I want to record my user testing sessions — so far, without success, at least for audio.

The one promising project the last time I tried was Kazam, but it was still too buggy for me to use successfully. It looks like it’s in pretty good shape now: it lets me choose the window to record, it can capture audio from both the speakers and the microphone (with human-readable pulldowns for which speaker device and which microphone device), and it worked successfully to record a Hangout. And it has nice file format options (including VP8/WebM, which is the best option for uploading to Wikimedia Commons).

Nice work, Kazam developers!

Remembering Adrianne Wadewitz

Adrianne, skepchickal

I remember, for a long time before I met her, wondering what “a wade wit” meant.

I remember a Skype conversation, years ago. Adrianne, Phoebe, SJ and I talked for probably three hours about the gender gap on Wikipedia, late into the night. Then and always, she was relentlessly thoughtful and incredibly sharp. As superb as she was in writing, she was even better in live conversation and debate.

I remember laughing and talking and laughing and talking at Wikimania 2012. I took this picture of her that she used for a long while as a profile pic. Someone on Facebook said it looked “skepchickal”, which she loved.

I remember her unfailing kindness and generosity, indomitable work ethic, and voracious appetite for knowledge. She made me proud to call myself a fellow Wikipedian.

Scholarly societies, subscription fees, and open access

Strategic planning with historians. :-)

This last weekend I flew to Chicago for a two-day strategic planning meeting for the History of Science Society (see my photos). The task, for me and about 40 other historians of science, was to figure out who the society should be trying to serve and what its goals should be. One of the key issues the society is dealing with is our membership model: joining the History of Science Society (HSS) currently consists of becoming a subscriber to the society’s main publications, “Isis” (a quarterly journal) and “Osiris” (an annual thematic journal), which are published by University of Chicago Press. The lion’s share of the society’s budget comes from subscription fees for these journals, but individual subscriptions (from about 2200 members, and falling) make up only about a third of that revenue; institutional subscriptions, mainly from libraries that subscribe to large bundles of content from academic publishers, make up the rest. This institutional subscription revenue has actually been increasing recently for HSS. But library budgets are increasingly squeezed, and libraries can only absorb so much of the cost of traditional journal publishing before many start cancelling the bundles they cannot afford.

Michael Magoulias of University of Chicago Press was part of this meeting, and he submitted a report on university press publishing as part of the ‘environmental scan’ document that was sent out before the meeting. In it, he frames the option of going open access for journals outside the sciences (like those of HSS, and probably those of many other scholarly societies as well) as shifting the costs from libraries to individual authors. Author-pays OA options (or large grants to cover traditional journal costs) are the only ones Magoulias mentioned, but that doesn’t reflect the reality of how OA publishing in the humanities is trending. In fact, there are huge numbers of journals in the humanities, as well as the social sciences and mathematics, that are run entirely outside of the traditional publishing industry. Several open source journal management platforms are available and developing rapidly. (Open Journal Systems seems to be the most widely adopted.) These are essentially DIY, digital-only options, but they can be run with *very* low infrastructure costs (perhaps a few hundred dollars per year for cloud hosting), with the usual sorts of unpaid labor of editing the journal and managing peer review. This approach may mean losing some of the fringe benefits of a high-quality traditional journal, such as professional typesetting and copyediting, but it doesn’t have to mean a fundamental difference in the quality of the scholarship.

But in the case of HSS (and probably other scholarly societies as well), shifting away from traditional publishing to a low-cost OA model on a free and open source platform would actually mean losing revenue as well. I’d never considered this before. The real issue, then, is not about shifting costs from libraries to individual authors. It is about libraries, through their bundled subscription fees to academic publishers, subsidizing the activities of scholarly societies (after the publishers have taken their cut). Is that how scholarly societies want to be funding themselves? I know that’s not how I want HSS to be funding itself.

OmniROM: solid Android ROM, nice place for newcomers

When my last phone died in December, I decided to steer clear of contracts (so that my family could maybe get off of AT&T once all the contracts on the plan expire) and get a Nexus 5. I’ve usually used CyanogenMod in the past, but I decided to try out the newer OmniROM this time. The Omni project started last year as a response to CyanogenMod shifting from a completely volunteer project to a for-profit company — sort of the Canonical of the Android ecosystem. I like that the philosophy of Omni is about respecting users and adding value to the open source Android ecosystem.

One concrete difference from CyanogenMod is that Omni encourages bug reports from avid users. (CyanogenMod does not take bug reports for nightly builds, even though that’s what the users who care most about new features and recent changes tend to use.) When I started using Omni, I noticed a few little things that annoyed me: inconsistent icons, and non-standard capitalization in the menu. So I filed some bugs in their bug tracker. These were minor issues, but the developers were quite responsive. The icons I complained about got fixed after a few days. So I decided to try to scratch my own itch for another bug. I followed their guide for getting set up as a developer, and then I submitted patches to fix the capitalization problems I had noticed. (All I did was change a few strings.) All my patches got merged within a few days of submission. :)
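
If you want to try something similar, the developer setup is basically the usual AOSP-style repo checkout plus Gerrit for code review. Roughly, and with the manifest URL, branch, and Gerrit address written from memory (double-check them against Omni’s own guide):

repo init -u https://github.com/omnirom/android.git -b android-4.4
repo sync
# ...edit a string, git commit, then push the change to Gerrit for review:
git push ssh://<your-gerrit-username>@gerrit.omnirom.org:29418/<project> HEAD:refs/for/android-4.4

Once the change is on Gerrit, reviewers can comment on it or merge it, just like they did with mine.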

OmniROM is still a small project, but so far I think it’s a great place for newcomers who want to try out open source Android development.

Harry Potter notes

We just watched Harry Potter and the Goblet of Fire. At the end, Brighton said:

I feel bad about what happened to Harry Potter. It hurts in my heart.

Also, I love Dumbledore’s eulogy for Cedric Diggory:

…exceptionally hard working, infinitely fair-minded, and most importantly, a fierce, fierce friend.

Something to aspire to.

all IRC, all the time

For a while now, I’ve been chasing the holy grail of IRC: running my own low-power, always-on server that I can connect to from anywhere. After 6+ weeks of uptime, I’m ready to declare my current solution a success. I’m running quassel-core (and other things) on a mini stick computer, and I can connect to it from home or remotely, by desktop, laptop, or Android device (by wifi or data connection).

There are plenty of ways to do this, but it’s taken a while for me to work it out, so I wanted to write it up.

TL;DR: Get at least a dual-core ARM device that runs Linux, install quassel-core and create a PostgreSQL database for it, then use Quassel on your computers and Quasseldroid on Android devices to connect. Use an SSH tunnel to the device when you’re away from your home network.

First, the device to run it on. Just about any ARM device that can run Linux takes care of the low-power part — a rooted Android phone, a Raspberry Pi, or some other mini Android or Linux device. If you can get Debian or some other full distro running (whether natively or as a loop device inside Android), you can probably run quassel-core. But you probably want at least a dual-core device for decent performance. Here is what I’ve tried:

  • Droid Incredible (1 Ghz single core, CyanogenMod with Debian running in a chroot): This worked moderately well, but it was not completely reliable. Every week or so on average, I would have to manually restart it, and it would often have an annoying delay between sending/receiving messages and them appearing in my clients. (I suspect this is an i/o bottleneck.)
  • Raspberry Pi (on raspbmc): Terrible delay, often topping 5 or 10 seconds. While some have reported running quassel-core without issue, others have found it unusably laggy. Anecdotally, this is probably the result of slow SD cards (like the class 4 I was using); I was also using SQLite, and never tested if PostgreSQL would perform better. If you want to try it on an RPi, try a lean and clean distro on a reasonably speedy SD card.
  • MK808B (1.6 Ghz dual core rk3066, Android 4.1 running Debian via Lil’ Debi): This worked quite well, even while simultaneously serving as a device for Netflix and other apps on my TV. The downside is that the network connection would become unresponsive pretty often. (The connection was more stable when I didn’t use any extraneous apps, and I expect that a device like this would do great with a native Linux distro… if you can get it running and get the wifi working. Some of the many similar devices have wifi working on native Linux distros.)
  • GK802 (1.2 Ghz quad core Freescale i.MX6, running Debian with a 3.0.35 kernel): This is my current device, which has been great. They go for $65-70 right now (I got mine on sale for $60 from geekbuying.com). While there are faster and more efficient quad-core sticks with rk3188 processors for a little less money, some of which have working Linux distros, I went with this one because Freescale has a reputation for good documentation and for playing nicely with the free software world, and because — similar to a Raspberry Pi — the “internal” memory is a micro SD card, so you don’t have to worry as much about bricking it. There’s a nice little Debian installer I used to get up and running, and you can also run Ubuntu on it pretty easily. After pinning udev, I was able to upgrade to Jessie without too much trouble. The class 4 micro SD card I’m using is probably the main system bottleneck, but it’s been able to handle Quassel just fine. Update 2014-09-12: You can get a more up-to-date kernel now. See here, particularly the links in the comments. And if you run into trouble, visit #imx6-dev on freenode IRC.

Second, quassel-core and the database. You’ll probably want the latest version, so on Debian you’ll probably want to use Testing (currently, “Jessie”). For the database, you should definitely use PostgreSQL instead of SQLite. SQLite will work well at first, but as the database grows it will take longer and longer for clients to connect and receive the backlog. Eventually, when my SQLite database hit about 180mb on my MK808B, it wouldn’t connect at all. With six weeks of Postgres so far, I’ve seen no degraded performance as the database grows. Unlike with SQLite, though, you’ll need to do some command line work before you can set up the client.
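
For reference, the command-line part looks roughly like this on Debian (the “quassel” role and database names here are just examples, so adjust to taste):

sudo apt-get install quassel-core postgresql
sudo -u postgres createuser --pwprompt quassel
sudo -u postgres createdb --owner=quassel quassel

After that, you point the Quassel client’s first-run core setup wizard at PostgreSQL and give it those credentials.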

Third, the clients. The desktop Quassel client is available on just about any system, and you’ll need to use it at least to set up your database initially. Quasseldroid is the Android client, which I like a lot. There’s also iQuassel for iOS, which I haven’t tried. Even on a modest Android device, you can pull tens of thousands of backlog messages in a few seconds.
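
On Debian and derivatives, the desktop client is packaged separately from the core, so installing it is probably just:

sudo apt-get install quassel-client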

Fourth, remote connections. There are many ways you could connect remotely to your quassel-core, but the one I’ve been using is just to SSH into the device, so that I can restrict password logins and use only ssh keys. On my Ubuntu laptop, I do a socks proxy for the Quassel port, like this:

ssh -D 4242 root@<my-server-ip-address>
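
An alternative, if you’d rather not touch the client’s proxy settings, is a plain local port forward to the core’s port (4242 unless you’ve changed it):

ssh -L 4242:localhost:4242 root@<my-server-ip-address>

With that running, the Quassel client just connects to localhost:4242 as if the core were local.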

On (rooted) Android devices, I use SSH Tunnel by Max Lv, which lets you route individual apps through SSH. In that case, I tunnel the Quasseldroid app to the Quassel port on my server, and then set up the connection within Quasseldroid to “localhost”.

If you don’t have root, or you use iOS, or you want a simpler setup, you could just allow connections from the Internet straight to your Quassel port.

All this may seem pretty complicated, but once you get it set up it’s extremely usable. (Any questions about the details, just ask.)