Leaving Facebook for 2022

For the new year, I’m deleting my Facebook account. I won’t rehash the reasons in detail, but the short of it is that Facebook is incredibly good at following people everywhere they go on the internet, and it warps the web into merely a network for selling you stuff and selling access to your eyeballs. Facebook has an enormous amount of power, and has proven that it can’t be trusted with that power. Deleting my account is one of the few ways I can push back. The more people that do this, the less completely Facebook can claim to be the be-all-end-all of social networks, and the easier the choice to leave will be for the next person.

Deleting my account isn’t an easy choice, because I really do value Facebook’s core feature of connecting me with people I know and like and love. That’s why I think it’s important to find and focus on other ways of staying connected.

I’m going to try out one way of staying connected: the open source, decentralized social network software Diaspora. I’ve set up my own server — diaspora.ragesoss.com — and I hope to build back at least some of my digital connections there. (The basic idea of Diaspora is that it’s a social network where the users are in complete control of the software. Anyone on one Diaspora server (a “pod”) can make connections with friends on any other Diaspora server, so it doesn’t really matter which one you call home. But since I know enough to run a server myself, I want to give it a spin!)

Please join me, and try to make something work that isn’t Facebook! Email me — sage@ragesoss.com — if you want an invite. It’ll be quiet at first, but I’ll try to post updates about me and my family to get updates flowing.

The other thing I’m aspiring to do, since I’ve downloaded all my Facebook data, is to go through everyone on my Friends list and send a personal message to as many people as possible, to let friends know what they mean to me (and to invite them to try out a non-Facebook social network along with me).

A yak shave turned good: Switching from Poltergeist to Headless Chrome for Capybara browser tests

I just finished up migrating all the Capybara feature tests in my Rails/React app from Poltergeist to Headless Chrome. I ran into a few issues I didn’t see covered in other write-ups, so I thought I’d pull together what I learned and what I found useful.

This pull request shows all the changes I had to make.

Motivation

Poltergeist is based on the no-longer-maintained PhantomJS headless browser. When we started using PhantomJS a few years ago, it was the only good way to run headless browser tests, but today there’s a good alternative in Selenium with Chrome’s headless mode. (Firefox has a headless mode now as well.) I briefly tried out a switch to headless Chrome a while ago, but I ran into too many problems early on and gave up.

This time, I decided to try it again after running into a weird bug — which I still don’t know the cause of. (Skip on down if you just want the migration guide.) This was a classic yak shave…

I got a request to add support for more languages for a particular feature: plots of the distribution of Wikipedia article quality before and after being worked on in class assignments. The graphs were generated with R and ggplot2, but this was really unreliable on the international version of the app. To get it working more reliably, I decided to try to reimplement the graphs client-side, using Vega.js.

The type of graph I needed — kernel density estimation — was only available in a newer version of Vega, so first I needed to update all the other Vega plots in the app to work on the latest version of Vega, which had a lot of breaking changes from the version we’d been on. That was tough, but along the way I made some nice improvements to the other plots, along with a great-looking implementation of the kernel density plot.

But as I switched over to the minified versions of the Vega libraries and got ready to call it done, all of a sudden my feature specs were all failing. They passed fine with the non-minified versions, but the exact same assets in minified form — the distributed versions, straight from a CDN — caused Javascript errors that I couldn’t replicate in a normal browser. My best guess is that there’s some buggy interaction between Vega’s version of UglifyJS and the JS runtime in PhantomJS, which is triggered by something in Vega.

In any case, after failing to find any other fixes, it seemed like the right time to take another shot at the Poltergeist → headless Chrome migration — which, I’m happy to say, worked out nicely. So, after what started as an i18n support request, my app no longer relies on R (or rinruby to integrate between R and Ruby), and my feature tests are all running more smoothly and with fewer random failures on headless Chrome.

😀

Using R in production was fun and interesting, but I definitely won’t be doing it again any time soon.

If you want to see that Vega plot that started it all, this is a good place to look. Just click ‘Change in Structural Completeness’. (Special thanks to Amit Joki, who added the interactive controls.)

Setting up Capybara

The basic setup is pretty simple: put selenium-webdriver and chromedriver-helper in the Gemfile, and then register the driver in the test setup file. For me it looked like this:

[ruby]
Capybara.register_driver :selenium do |app|
  options = Selenium::WebDriver::Chrome::Options.new(
    args: %w[--headless --no-sandbox --disable-gpu --window-size=1024,1024]
  )
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)
end
[/ruby]
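For reference, the Gemfile side is just the two gems. Putting them in a :test group is my own habit, not a requirement:

```ruby
# Gemfile — drivers for Capybara feature specs
group :test do
  gem 'selenium-webdriver'
  gem 'chromedriver-helper' # installs a chromedriver binary matching your Chrome
end
```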

Adding or removing the headless option makes it easy to switch between modes, so you can pop up a real browser to watch your tests run when you need to debug something.
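One way to wire that up is a second driver registration without the headless flag; here's a sketch (the :selenium_headful name is my own, not a Capybara convention):

```ruby
# Same as the main driver, minus `headless`, so a real browser window
# pops up and you can watch the test run.
Capybara.register_driver :selenium_headful do |app|
  options = Selenium::WebDriver::Chrome::Options.new(
    args: %w[--no-sandbox --disable-gpu --window-size=1024,1024]
  )
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)
end

# Then, while debugging a spec:
# Capybara.current_driver = :selenium_headful
```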

Adding the chrome: stable addon in .travis.yml got it working on CI as well.

Dealing with the differences between Poltergeist and Selenium Chromedriver

You can run Capybara feature tests with a handful of different drivers, and the core features of the framework will work with any driver. But around the edges, there are some pretty big differences in behavior and capabilities between them. For Poltergeist vs. Selenium and Chrome, these are the main ones that I had to address during the migration:

More accurate rendering in Chrome

PhantomJS has some significant holes in CSS support, which is especially a problem when it comes to misrendering elements as overlapping when they should not be. Chrome does much more accurate rendering, closely matching what you’ll see using Chrome normally. Relatedly, Poltergeist implements .trigger('click'), which, unlike the normal Capybara .click, can work even if the target element is underneath another one. A common error message with Poltergeist points you to try .trigger('click') when the normal .click fails, and I had to swap a lot of those back to .click.
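In practice the swap looked like this (the .save-button selector is just an illustration, not from my app):

```ruby
# Poltergeist-only: .trigger('click') bypasses overlap checks entirely.
# find('.save-button').trigger('click')

# Selenium + Chrome: a plain Capybara click, which requires the element
# to actually be clickable. If Chrome reports the click was intercepted
# by another element, the fix is usually in the CSS, not the test.
find('.save-button').click
```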

Working with forms and inputs

The biggest problem I hit was interacting with date fields. In Poltergeist, I was using text input to set date fields, and this worked fine. Things started blowing up in Chrome, and it took me a while to figure out that I needed to provide Capybara with Date objects instead of strings to make it work. Capybara maintainer Thomas Walpole (who is incredibly helpful) explained it to me:

fill_in with a string will send those keystrokes to the input — that works fine with Poltergeist because it doesn’t actually support date inputs, so they’re treated as standard text inputs and the with parameter is just entered into the field. Chrome, however, supports date inputs with its own UI.
By passing a date object, Capybara’s selenium driver will use JS to correctly set the date to the input across all locales the browser is run in. If you do want to send keystrokes to the input, you’d need to send the keystrokes a user would have to type on the keyboard in the locale the browser is running in — in a US English locale that would mean fill_in('campaign_start', with: '01/10/2016')
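So, for a date input, the working version looks like this (using the campaign_start field name from Thomas’s example):

```ruby
# Poltergeist treated date inputs as plain text fields, so this worked:
# fill_in 'campaign_start', with: '2016-01-10'

# With Selenium + Chrome, pass a Date object instead; the driver sets
# the value via JS, regardless of the browser's locale.
fill_in 'campaign_start', with: Date.new(2016, 1, 10)
```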

Chromedriver is also pickier about which elements you can send input to. In particular, it must be focusable. With some trial and error, I was able to find focusable elements for all the inputs I was interacting with.

Handling Javascript errors

The biggest shortcoming with Selenium + Chrome is the lack of support for the js_errors: true option. With this option, a test will fail on any Javascript error that shows up in the console, even if the page is otherwise meeting the test requirements. Using that option was one of the main reasons we switched to Poltergeist in the first place, and it’s been extremely useful in preventing bugs and regressions in the React frontend.

Fortunately, there’s a fairly easy way to hack this feature back in with Chrome, as suggested by Alessandro Rodi. I modified Rodi’s version a bit, adding in the option to disable the error catching on individual tests — since a few of my tests involve testing error behavior. Here’s what it looks like, in my rails_helper.rb:

[ruby]
  # fail on javascript errors in feature specs
  config.after(:each, type: :feature, js: true) do |example|
    # pass `js_error_expected: true` to skip JS error checking
    next if example.metadata[:js_error_expected]

    errors = page.driver.browser.manage.logs.get(:browser)
    if errors.present?
      aggregate_failures 'javascript errors' do
        errors.each do |error|
          # some specs test behavior for 4xx responses and other errors.
          # Don't fail on these.
          next if error.message =~ /Failed to load resource/

          expect(error.level).not_to eq('SEVERE'), error.message
          next unless error.level == 'WARNING'
          STDERR.puts 'WARN: javascript warning'
          STDERR.puts error.message
        end
      end
    end
  end
[/ruby]

Different behavior using matchers to interact with specific parts of the page

Much of the Capybara API is driven with HTML/CSS selectors for isolating the part of the page you want to interact with.

I found a number of cases where these behaved differently between drivers, most often in the form of Chrome reporting an ambiguous selector that matches multiple elements when the same selector worked fine with Poltergeist. These were mostly cases where it was easy enough to write a more precise selector to get the intended element.

In a few cases with some of the intermittently failing specs, Selenium + Chrome also provided more precise and detailed error messages when a target element couldn’t be found — giving me enough information to fix the badly-specified selectors that were causing the occasional failures.

Blocking external urls

With Poltergeist, you can use the url_blacklist option to prevent loading specific domains. That’s not available with Chromedriver. We were using it just to reduce unnecessary network traffic and speed things up a bit, so I didn’t bother trying out the alternatives, the most popular of which seems to be to use Webmock to mock responses from the domains you want to block.
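I haven’t adopted this myself, but the Webmock route would look roughly like this. One caveat worth noting: WebMock intercepts HTTP requests made from Ruby, so requests made by the browser itself are a separate problem.

```ruby
# rails_helper.rb — block external connections, keeping localhost open
# so Capybara can still reach the app under test.
require 'webmock/rspec'
WebMock.disable_net_connect!(allow_localhost: true)

# Or stub a specific external domain instead of letting requests fail
# (the domain here is a placeholder):
# stub_request(:get, /example-cdn\.test/).to_return(status: 200, body: '')
```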

Status code

In Poltergeist, you can easily see what the HTTP status code for a webpage is: page.status_code. This feature is missing altogether in Selenium. I read about a few convoluted ways to get the status code, but for my test suite I decided to just do without explicit tests of status codes.

Other useful migration guides and resources

There are a bunch of blog posts and forum threads on this topic, but the two I found really useful are:

* https://about.gitlab.com/2017/12/19/moving-to-headless-chrome/
* https://github.com/teamcapybara/capybara/issues/1860

I am 💯 and you can too.

100% test coverage! Catch all the bugs!

I’ve been working on a Ruby on Rails app for more than three years, and Ruby coverage for its rspec test suite has been at 100% for most of the last year. 😀

People with more experience may try to tell you this is a bad idea. Test what you need to test. 90% is a good rule of thumb. If you’re doing TDD right, you’ll be testing based on needs, and coverage will take care of itself.

No! Do it!

Why?

  • You’ll learn a lot about Rails, your gem dependencies, and your test tools.
  • It’ll help you write better tests and more testable code.
  • You will find bugs you didn’t know about!
  • It’s fun! Getting those tricky bits tested is a good puzzle.
  • Once you get there, it’s easy to maintain.

Okay, but how?

Lies!

You may have heard people say that code coverage is a lie. It’s true. For example, there are some tricksy ways to add specs for rake tasks to your test suite, but because rake tasks start from a different environment, they don’t integrate cleanly with simplecov. That’s why we exclude rake tasks from the coverage metrics in my app, even though we do test them.
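The exclusion itself is a one-line SimpleCov filter. The lib/tasks path is where this app keeps its rake tasks; adjust for yours:

```ruby
# spec_helper.rb — must run before the application code is loaded
require 'simplecov'
SimpleCov.start 'rails' do
  add_filter 'lib/tasks' # rake tasks are tested, but outside simplecov's reach
end
```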

Sadly, even once you reach 💯, it may be tough to keep it in the long run. Ruby 2.5 introduced a big change in the code coverage API. With that change, it has become possible for code coverage tools to report branch coverage rather than just line coverage, so the tricks with conditional modifiers and ternary operators may still not get you to 100% in the future. (The deep-cover gem also does this, without requiring Ruby 2.5.) But don’t fret; you’ll probably be able to do something like pin your coverage gem to the last version that just uses line coverage. A small price to pay for the peace of mind of knowing your app is 100% bug free. Or maybe it’s just a number to brag about. Either way, worth it!

 

Obviously this is not the case. But I still recommend trying to get to 100% coverage, for the reasons listed above.
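If you’re curious what the new branch coverage API reports, here’s a minimal sketch using Ruby 2.5’s built-in coverage library. The guard_example.rb file and the check method are invented for illustration:

```ruby
require 'coverage'
require 'tmpdir'

# Ruby 2.5's built-in coverage API, with branch coverage switched on.
Coverage.start(lines: true, branches: true)

# A file with a one-line guard clause: the pattern that line coverage
# scores as fully covered even when only one branch ever runs.
path = File.join(Dir.mktmpdir, 'guard_example.rb')
File.write(path, <<~RUBY)
  def check(flag)
    return :guarded unless flag
    :ok
  end
  check(true)
RUBY
load path

data = Coverage.result.fetch(path)
puts "lines:    #{data[:lines].inspect}"
puts "branches: #{data[:branches].inspect}"
# Every line executed, but the :guarded side of the `unless` was
# never taken — line coverage 100%, branch coverage not.
```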

Remote pairing with Teletype: worth trying

Outreachy intern maudite and I tried out remote pair programming with the new Teletype plugin for Atom, and it’s really nice! At this point, it only works with Atom, and doesn’t yet support voice or video or browser sharing — we used Hangouts for that — but for the core features of working through code together, it does a really nice job.

In the future, Teletype may also allow cross-editor collaboration, so that each person can use whatever code editor (and fonts, and keybindings, etc) that they prefer.

First impressions of Ruby branch coverage with DeepCover

Branch coverage for Ruby is finally on the horizon! The built-in coverage library is expected to ship in Ruby 2.5 with branch and method coverage options.

And a pure-Ruby gem is in development, too: DeepCover.

I gave DeepCover a try with my main project, the Wiki Education Dashboard, and the coverage of the Ruby portions of the app dropped from 100% to 96.75%. Most of that is just one-line guard clauses like this:

[ruby]return unless Features.wiki_ed?[/ruby]

But there are a few really useful things that DeepCover revealed. First off, unused scopes on Rails ActiveRecord models:

[ruby]
class Revision < ActiveRecord::Base
  scope :after_date, ->(date) { where('date > ?', date) }
end
[/ruby]

Unlike default line coverage, DeepCover showed that this lambda wasn’t ever actually used. It was dead code that I removed today.

Similarly, DeepCover shows where default arguments in a method definition are never used, like this:
[ruby]
class Article < ActiveRecord::Base
def update(data={}, save=true)
self.attributes = data
self.save if save
end
end
[/ruby]

This method was overriding the ActiveRecord update method, adding an option to update without saving. But we no longer use that second argument anywhere in the codebase, meaning I could delete the whole method and rely on the standard version of update.

I want a better social network

Facebook kinda sucks, and it’s not doing much to foster an informed and politically engaged citizenry. It certainly doesn’t help me to be a better citizen. Here’s what a better social network might look like.

Incentives for political engagement

Likes and comments from friends are the main drivers of both the creation of new posts and the spread of content through the newsfeed. I post things because it’s nice to feel liked and loved and to have people interested in what I have to say. Things that inspire strong emoji and pile-on comments are the most likely to earn me likes, and also the most likely to show up in my feed.

Imagine, instead, if local political engagement — showing up to a town council meeting, or calling my state legislator about a bill currently in discussion, or reporting a pothole — was the currency of your social network. I want something like the Sunlight Foundation’s tools in the middle of my online social experience. I want to see what my friends are saying, but also what they’re doing — especially when it’s something I can join in on.

Maybe streaks, like GitHub had?

Whatever the mechanisms, the things that are satisfying and addicting on a better social network should be the things that are also good for people.

Tools for collaboration

Discussions on Facebook, even when it comes to long-term issues of public importance, are ephemeral. There’s no mechanism for communities and networks to build and curate shared knowledge and context.

Local community wikis (like the handful of successful ones on localwiki.org) are still a good idea, they just lack critical mass. They would work if integrated into a better social network.

For non-local things — the quality of news sources, organizations, and everyday consumer issues — something more like aggregate reviews should be part of the system.

No ads

A big, distracting part of my Facebook feed is the ads and promoted stories. These are mostly extra-clickbait-y, ad-heavy versions of the same kinds of clickbait showing up in my feed anyway. More fundamentally, showing ads is what Facebook is designed for. Everything that is done to make it interesting and addicting and useful is ultimately an optimization for ad revenue. When one user experience change would improve the well-being of users and another would lead to 1% more ad impressions, Facebook will take the ad-driven path every time.

A better social network wouldn’t have ads.

Free software that respects privacy

Obviously, being able to get your data out and move it to another host would be a feature of an ideal social network. If the people who run it start doing things against your interests, you should have better alternatives than just signing off and deleting everything.


 

To recap: I want to take Facebook, Nextdoor, Sunlight Foundation, Wikipedia, and lib.reviews, smash them all together into a great user experience and an AGPL license, and kill Facebook.

Now is the perfect time to take another shot at it. If there’s anyone working on something like this seriously, sign me up to help.

Moving forward after the election

I’ve been mulling over what happened in the election, and what I should do now.

I think these things are key:

  • Just a small difference in turnout would have turned the election around. Any conclusions about the American people that we draw based on that election would still hold true if Hillary had won. Racism, misogyny, xenophobia? They’ve been major components of our culture all along, and that would still be the case if Trump had lost.
  • Michael Moore’s explanation of why Trump would win, in retrospect, was pretty dead on. But there’s one thing I think he gets wrong: “The left has won the cultural wars.” We’re winning — with bare majorities — on some vital issues, but we haven’t won yet. Winning would mean a candidate like Trump would never have had a chance.
  • Education is one of the things that makes the biggest difference, and it’s one we can change. In recent elections, education has not been a great differentiator of voting Republican vs. Democrat. This time it was. Combine that with age and it’s even more dramatic. This is closely related to progress in the culture wars; especially in the case of younger college graduates, there’s a shared vocabulary to talk about social justice and privilege, and a conceptual framework that has become part of our everyday social and political lives rather than just abstract academic jargon.

So here’s my strategy:

  • Give money now to the organizations that can help mitigate the short-term damage. The ACLU, Planned Parenthood, and independent investigative journalism are at the top of my list to fight back against abuses of power and help vulnerable people.
  • Pour my energy into things that will help in the longer term. For me, that means my day job at Wiki Education Foundation: helping professors and their students to improve Wikipedia in the areas that really matter.

If you’re a technologist, find ways to use technology to fix democracy. I’ve got some ideas — we need a better social network than Facebook, one that drives reality-based discussion and concrete political action rather than ad clicks — but I’ll save that for another post.

If you’re a web developer and you want to volunteer for an open source project that makes a concrete difference in education and public knowledge, I’ve got plenty of cool things you could do for Wiki Education Foundation’s platform. It’s Ruby on Rails and React, and every day professors and college students are using it to improve coverage of important topics on Wikipedia.

Diderot — a Pebble watchface for finding nearby unillustrated Wikipedia articles

I published a watchface for Pebble smartwatches that shows you the nearest Wikipedia article that lacks a photograph. Have a Pebble and like to — or want to — contribute to Wikipedia? Try it out! It’s called Diderot. (Collaborators welcome!)

After using it myself for about a month and a half, I’ve finally added photographs to all the Wikipedia articles near my house within the range of Wikipedia’s ‘nearby’ API.

Extra thanks go to Albin Larrson, who built the WMF Labs API that my app uses to find nearby unillustrated articles. The great thing about it is that it filters out articles that have .png or .svg images, so you still find the articles that have only a map or logo rather than a real photograph.

Rails migrations and Capistrano don’t mix

Last night I learned the hard way what happens when Rails migrations break.

My main project, the Wiki Ed Dashboard, is set up for automatic deployment — via Capistrano and travis-ci — whenever we push new commits to the staging or production branch. It’s mostly nice.

But I ran some migrations yesterday that I shouldn’t have. In particular, this one added three new columns to a table. When I pushed it to staging, the migration took about 5 minutes and then finished. Since the staging app was unresponsive during that time, I waited until the evening to deploy it to production. But things went much worse on production, which has a somewhat large database. The migration took more than 10 minutes — at which point, travis-ci decides that the build has become unresponsive, and kills it. The migration didn’t complete.

No problem, I thought, I’ll just run the migration again. Nope! It turns out that the first column from that migration actually made it into the MySQL database. Running it again triggered a duplicate column error. Hmmm… okay. Maybe all the columns got added, but the migration didn’t get added to the database? So I manually added the migration id to the schema_migrations table. Alas, no. Things are still broken, because the other two columns didn’t actually get added.

That’s why Rails migrations have an up and a down version, right? I’ll just migrate that one down and back up. But with only one of three columns present, neither the up nor the down migration would run. I ended up writing an ad-hoc migration to add just the second and third columns, deploying it from my own machine, and then deleting the migration afterwards. I fixed production, but it wasn’t pretty.

My takeaway from this: if you deploy via Capistrano — and especially if you deploy straight from your continuous integration server — then write a separate migration for every little thing. When things go bad in the middle of deployment, you don’t want to be stuck with a half-completed migration.
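To make that concrete, the safer shape is several tiny migrations instead of one. The table and column names here are invented for illustration:

```ruby
# One column per migration file, so a killed deploy leaves behind at
# most one small, easily re-runnable step.
class AddRetentionCountToArticles < ActiveRecord::Migration[5.1]
  def change
    add_column :articles, :retention_count, :integer
  end
end

# ...and in its own, separate migration file:
class AddRetentionNotesToArticles < ActiveRecord::Migration[5.1]
  def change
    add_column :articles, :retention_notes, :text
  end
end
```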