Using MoveOn for petitions

It's been a long time since I started a petition to try to change something in my world. But in recent weeks my local City Council has been threatening to do some silly things related to funding the development of bike and pedestrian paths here, and I'd heard enough people say informally that they were concerned by those threats that I decided it was time to create a central spot where they could put all their names for Council members to see.

And that's how I ended up using the petitions.moveon.org service, which has turned out to be excellent for this purpose.

A few things in particular that I like about it:

  • While clearly built to scale for national- and state-level petitions, the MoveOn tool did a great job of enabling a smaller petition targeted at a local legislative body that might not otherwise be in their system. I was able to enter the names and email addresses of my local Council members as "targets" of the petition, and they were then set up to receive signature deliveries directly.
  • Related to that, the MoveOn system allowed the targets of the petition to respond directly to the petition signers with a message, without giving either side direct access to the other's private contact information.
  • The system automatically picks small signature goals to start with and then scales them up as new milestones are hit. I think this helps avoid the awkward "WE'RE GOING TO HAVE A HUNDRED MILLION SIGNATURES HERE!" declarations by petition creators that quickly yield disappointment.
  • The system offered up interesting summary stats about where signatures were coming from and what activity on the petition looked like over time.
  • When I had to contact MoveOn's petition support (one of the signers had accidentally left out a word in a comment that significantly changed the meaning, and wanted it corrected) they were fast to respond and provided a quick solution.
  • Other features in the petition tool, like handling delivery via print and email, contacting petition signers, "declaring victory," and more seemed really well designed: simple, effective, and built for bringing about real action.
  • The petition application is open source and available for anyone to contribute to.

One of the things I wanted to do as the signature numbers climbed, and as I prepared to present the petition to Council, was to create something that visualized the signing names in one place. The signature count on the petition itself was not especially prominent, and because only 10 names are displayed at a time, it was easy to miss the sense of a large part of the local population making a clear statement about what they want.

So I sniffed the XMLHttpRequests being made by the MoveOn site and found the underlying API that was being used to load the signature names. I whipped up a simple PHP script that queries that API to fetch all the names, and then does some basic cleanup of the list: leaving out anything that doesn't look like a full name, making capitalization and spacing consistent, sorting, etc.
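
My actual script is PHP and does a bit more than this, but as a rough sketch of the same idea in shell: the API path and the JSON field name below are placeholders, not MoveOn's real ones (those came from watching the site's requests).

# Placeholder endpoint and JSON field -- substitute whatever the site's
# XMLHttpRequests actually use.
# jq extracts the signer names, awk keeps only entries with at least two words,
# tr collapses extra spaces, and sort -uf de-duplicates case-insensitively.
curl -s "https://petitions.moveon.org/api/EXAMPLE_SIGNATURES_ENDPOINT" \
  | jq -r '.[].name' \
  | awk 'NF >= 2' \
  | tr -s ' ' \
  | sort -uf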

I published the tool online at https://github.com/ChrisHardie/moveon-petition-tools in case anyone else might find it useful. (I later learned that MoveOn makes a CSV export of signatures and comments available when you go to print your petition, so that's an option too.)

Using the output of my tool, I created a simple graphic that shows all of the signed names to date.

All in all the MoveOn petition platform has been great, and I think it's made a difference just the way I wanted it to. I highly recommend it.

Gluing the web together with a personal webhook server

I mentioned earlier that I'm using a personal webhook server as glue between various online tools and services, and this post expands on that.

Why It Matters

For the open web to work well enough to bring people back to it, information has to be able to come and go freely between different systems without relying on centrally controlled hubs.

Just like RSS feeds, REST APIs and other tools that allow software programs to talk to each other across disparate systems, webhooks are an important part of that mix. Whereas many of those tools are "pull" technologies - one system goes out and makes a request of another system to see what's been added or changed - webhooks are a "push" technology, where one system proactively notifies another about a change or event that's occurred.
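
To make the "push" idea concrete: a webhook delivery is just an HTTP request, usually a POST with a small payload, that the notifying system sends to a URL you've registered with it. A hypothetical delivery might look like this (the endpoint and payload are made up for illustration):

# Hypothetical webhook delivery -- the URL and payload are illustrative only.
curl -X POST https://example.com/hooks/new-item \
  -H "Content-Type: application/json" \
  -d '{"event": "item.created", "id": 42, "title": "Hello world"}'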

You might be using webhooks right now! Popular services like IFTTT and Zapier are built in part on webhooks in order to make certain things happen over there when something else has happened over here. "Flash my lights when my toaster is done!" And while I have great respect for what IFTTT and Zapier do and how they do it, I get a little uncomfortable when I see (A) so much of the automated, programmable web flowing through a small number of commercially oriented services, and (B) software and service owners building private API and webhook access for IFTTT and Zapier that they don't make available to the rest of us.

So I decided that just as I host RSS feeds and APIs for many of the things I build and share online (mostly through WordPress, since it makes that so easy), I also wanted to have the ability to host my own webhook endpoints.

How I Set It Up

I used the free and open source webhook server software. (It's unfortunate that they chose the same name as the accepted term for the underlying technical concept, so just know that this isn't the only way to run a webhook server.) I didn't have a Go environment set up on the Ubuntu server I wanted to use, so I installed the precompiled version from their release directory.
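
For reference, the install amounted to grabbing a release tarball and dropping the binary into place, roughly like this (the version number and asset name are illustrative; check the project's GitHub releases page for the current ones):

# Version and filename are illustrative -- adjust to the current release.
wget https://github.com/adnanh/webhook/releases/download/2.6.5/webhook-linux-amd64.tar.gz
tar -xzf webhook-linux-amd64.tar.gz
sudo mv webhook-linux-amd64/webhook /usr/local/bin/webhook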

I created a very simple webhook endpoint in my initial hooks.json configuration file that would tell me the server was working at all:

[
  {
    "id": "webhook-monitor",
    "execute-command": "/bin/true",
    "command-working-directory": "/var/tmp",
    "response-message": "webhook is running",
  }
]

Then I started up the server to test it out:

/usr/local/bin/webhook -hooks /path/to/hooks.json -verbose -ip 127.0.0.1

This tells the webhook server to listen only on the server's localhost network interface, on its default port of 9000. When I ping that endpoint I should see a successful result:

$ curl -I http://localhost:9000/hooks/webhook-monitor
HTTP/1.1 200 OK
...

From there I set up an nginx configuration to make the server available on the Internet. Here are some snippets from that config:

upstream webhook_server {
    server 127.0.0.1:9000;
    keepalive 300;
}

limit_req_zone $request_uri zone=webhooklimit:10m rate=1r/s;

server {
    ...

    location /hooks/ {
        proxy_pass http://webhook_server;
        ...

        limit_req zone=webhooklimit burst=20;
    }

    ...
}

This establishes a proxy setup where nginx passes requests for my public-facing server on to the internally running webhook server. It also limits them to one request per second (with a burst allowance of 20) so that a poorly configured webhook client can't hammer my webhook server too hard.

I generated a Let's Encrypt SSL certificate for my webhook server so that all of my webhook traffic is encrypted in transit.
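
There are a few ways to do that; one common approach, assuming certbot with its nginx plugin is installed and using a placeholder hostname, is a single command:

# Hostname is a placeholder -- use the domain your webhook server answers on.
sudo certbot --nginx -d webhooks.example.com

Then I added the webhook startup script to my server's boot-time startup by creating /etc/init/webhook.conf: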

description "Webhook Server"
author "Chris Hardie"

start on runlevel [2345]
stop on runlevel [!2345]

setuid myuser

console log

normal exit 0 TERM

kill signal KILL
kill timeout 5

exec /usr/local/bin/webhook -hooks /path/to/hooks.json -verbose -ip 127.0.0.1 -hotreload

The hotreload parameter tells the webhook server to monitor for and load any changes to the hooks.json file right away, instead of waiting for you to restart/reload the server. Just make sure you're confident in your config file syntax. 🙂

After that, service webhook start will get things up and running.

To add new webhook endpoints, you just add on to the JSON configuration file. There are some good starter examples in the software documentation.

I strongly recommend requiring a query string parameter that acts as a "secret key" before allowing any service to trigger one of your webhook endpoints. I also recommend setting up your nginx server to pass along the calling IP address of webhook clients and then matching it against a whitelist of allowed callers in your hook config. These steps will help make sure that only the callers you intend can trigger your webhook server.
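
As a sketch of what that can look like in hooks.json, here's a hook definition that requires both a matching "token" URL parameter and an allowed source address. The id, command, paths, token value and IP range are all placeholders, and note that behind a reverse proxy the source address the webhook server sees may be the proxy's unless the real client IP is passed through:

[
  {
    "id": "deploy-my-site",
    "execute-command": "/path/to/deploy.sh",
    "command-working-directory": "/var/www/my-site",
    "trigger-rule": {
      "and": [
        {
          "match": {
            "type": "value",
            "value": "my-long-random-secret",
            "parameter": {
              "source": "url",
              "name": "token"
            }
          }
        },
        {
          "match": {
            "type": "ip-whitelist",
            "ip-range": "203.0.113.0/24"
          }
        }
      ]
    }
  }
]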

Finally, I suggest setting up some kind of external monitoring for your webhook server, especially if you start to depend on it for important actions or notifications. I use an Uptimerobot check that ensures my testing endpoint above returns the "webhook is running" string that it expects, and alerts me if it doesn't.

If you don't want to go to the trouble of setting up and hosting your own webhook server, you might look at Hookdoo, which hosts the webhook endpoints for you and then still allows you to script custom actions that result when a webhook is called. I haven't used it myself but it's built by the same folks who released the above webhook server software.

How I Use It

So what webhook endpoints am I using?

My favorite and most frequently used is a webhook that is called by BitBucket when I merge changes to the master branch of one of the code repositories powering various websites and utilities I maintain. When the webhook is called it executes a script locally that then essentially runs a git pull on that repo into the right place, and then pings a private Slack channel to let me know the deployment was successful. This is much faster than manually logging into my webserver to pull down changes or doing silly things like SFTPing files around. This covers a lot of the functionality of paid services like DeployBot or DeployHQ, though obviously those tools offer a lot more bells and whistles for the average use case.
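
The script itself is nothing fancy. A stripped-down sketch of that kind of deploy script might look like the following, with the site path and Slack incoming-webhook URL as placeholders rather than the ones I actually use:

#!/bin/bash
# Hypothetical deploy script: pull the latest master, then ping Slack.
# The site path and Slack webhook URL below are placeholders.
set -e

SITE_DIR="/var/www/my-site"
SLACK_URL="https://hooks.slack.com/services/XXX/YYY/ZZZ"

cd "$SITE_DIR"
git pull origin master

curl -s -X POST -H "Content-Type: application/json" \
  -d '{"text": "Deployed latest master for my-site."}' \
  "$SLACK_URL"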

I use a version of this webhook app for SmartThings to send event information from my various connected devices and monitors at home to a database where I can run SQL queries on them. It's fun to ask your house what it's been up to in this way.

For the connected vehicle monitoring adapter I use, they have a webhook option that sends events about my driving activity and car trips to my webhook server. I also store this in a database for later querying and reporting. I have plans to extend this to create or modify items on my to-do list based on locations I've visited; e.g. if I go to the post office to pick up the mail, check off my reminder to go to the post office.

I've asked the Fastmail folks to add in support for a generic webhook endpoint as a notification target in their custom mail processing rules; right now they only support IFTTT. There are lots of neat possibilities for "when I receive an email like this, do this other thing with my to-do list, calendar, smart home or website."

Speaking of IFTTT, they do have a pretty neat recipe component that will let you call webhooks based on other triggers they support. It's a very thoughtful addition to their lineup.

Conclusion

I'm not expecting many folks to go out and set up a webhook server; there's probably a pretty small subset of the population that will find this useful, let alone act on it. But I'm glad it's an option for someone to glue things together like this if they want to, and we need to make sure that software makers and service providers continue to incorporate support for webhooks and other technologies of an open, connected web as they go.

My everyday apps

This may be one of those posts that no one other than me cares about, but I've been meaning to take a snapshot of the software tools I use on a daily basis right now, and how they're useful to me. (This comes at a time when I'm considering a major OS change in my life -- more on that later -- so as a part of that I need to inventory these tools anyway.)

So every day when I get up and stretch my arms in front of my energy inefficient, eye-strain-causing big blue wall screen with a cloud in the middle, grumbling to myself that we now call things "apps" instead of programs or software or really any other name, I see:

Airmail 3 - my main tool for reading, sorting, sending and finding email across a few different accounts. I replaced Thunderbird with it a few years back -- really good choice.

Chrome - on the desktop, my daily browser of choice. I tried the new Firefox recently and liked it, but not enough to switch. I have lots of extensions installed but some favorites are Adblock Plus, JSONView and XML Tree, Momentum, Pinboard Tools, Postman, Signal Private Messenger, Vanilla Cookie Manager, Xdebug helper and Web Developer. On mobile I use iOS Safari.

macOS and iOS Calendar and Contacts - they work with all of the various online calendars and address books I sync to, and have stayed pretty intelligent and user-friendly without getting too cluttered.

Todoist - for task management and keeping my brain clear of things I care about but don't want to have to remember.

Slack - for communicating with my co-workers and others. I have accounts on 11 different Slack instances, and only stay logged in to about 4 of those at once.

Continue reading "My everyday apps"

WordCamp for Publishers Denver

I'm excited to be a part of the team that is organizing the first WordCamp for Publishers happening in Denver, Colorado this coming August 17th to 19th.

In my work on Automattic's VIP service offering I've been able to see some of the incredible things that journalists and publications of all types are doing with WordPress - really interesting stuff that pushes the software in new directions. So when I saw some of my colleagues discussing an event that would bring engineering, product and editorial teams together around all the things WordPress can do in the publishing world (for publications big and small), I wanted to be a part of making it happen.

We're looking for speakers and sponsors now. If you know someone who might want to be a part of this first-of-its-kind event, please point them in our direction. And if you're involved in publishing with WordPress at any scale, or just want to learn about how media organizations are using it to modernize their publishing workflow, I hope you'll consider attending. Tickets ($40) will go on sale soon!

Creating a private website with WordPress

When we became parents in 2015, Kelly and I talked about where and how we wanted to share the initial photos and stories of that experience with a small group of our family and friends. In case you haven't noticed, I feel pretty strongly about the principle of owning our digital homes. So I felt resistance to throwing everything up on Facebook in hopes that we'd always be able to make their evolving privacy and sharing settings and policies work for us, while also trusting that every single Facebook friend would honor our wishes about re-sharing that information.

I took some time to explore tools available for creating a private website that would be relatively easy for our users to access, relatively easy to maintain, and still limited in how accessible the content would be to the wider world. (I tend to assume that all information connected to the Internet will eventually become public, so I try to avoid ever thinking in terms of absolute privacy when it comes to websites of any kind.)

I thought about using WordPress.com, which offers the ability to quickly create a site that is private and viewable only by invited users while maintaining full ownership and control of the content. I passed on this idea in part because it didn't allow quite the level of feature customization that I wanted, and partly because it's a service of my employer, Automattic. While I fully trust my colleagues to be careful and sensitive to semi-private info stored there, it felt a little strange to think of creating something a bit vulnerable and intended for a small group of people within that context. I would still highly recommend the WordPress.com option for anyone looking for a simple, free/low-cost solution to get started.

Here are the WordPress tools I ended up using, with a few notes on my customizations:

Basic WordPress Configuration

For the basic WordPress installation and configuration, I made the following setup choices:

  • I put the site on a private, dedicated server so that I had control over the management and maintenance of the site software (as opposed to a shared server where my content, files or database may be accessible to others).
  • I used a Let's Encrypt SSL certificate and forced all traffic to the SSL version of the site, to ensure all communication and access would be encrypted.
  • I set up a child theme of a default WordPress theme so I could add a few customizations that would survive future parent theme updates.
  • I set "Membership" so that "Anyone can register" in the role of Subscriber (see more below on why this is okay).
  • For Search Engine Visibility I set "Discourage search engines from indexing this site".
  • For discussion I set "

Continue reading "Creating a private website with WordPress"

Using Todoist to organize all the things

For just over two years now I've been using Todoist as my primary to-do list manager and personal organizer software. I pay for the upgraded Premium version at US$28.99/year. I really like it and it's helped me stay on top of all the things I want to get done in my professional life, personal life, local community and beyond.

(Before Todoist, I'd been using Taskpaper and loved the simplicity of its interface and file storage. The software hit a period of being unmaintained and I really needed something up to date, so I switched. Taskpaper has since seen new life as a project, so it's worth checking out again too.)

The Todoist website linked above already showcases many of its features so I won't bother repeating those, but here are a few of the things I especially appreciate:

Continue reading "Using Todoist to organize all the things"

Preparing to have my wallet stolen

This post is from the "random life-hacks department."

I don't like worrying about losing my wallet. I don't really carry anything of great significance in it...little or no cash, some ID, and a few credit cards. But in the past I also knew that if I lost it or if it was stolen, I'd spend some anxious time trying to remember exactly what was in it, and then even more time searching around for the right phone number to call to get things canceled and replaced.

And it felt like there were more important things to worry about.

Ever since I started using 1Password, I don't worry about this as much any more.

Continue reading "Preparing to have my wallet stolen"

Dispatches from my Internet of Things

A few years ago I noticed that a couple of different tools and services I was using at the time were offering the option to tweet when I engaged with them somehow. I was interested to try it out but I didn't want to clutter up my human-authored Twitter feed with a bunch of software-authored stuff that I couldn't necessarily control the timing or content of.

So, I created the @JCHThings Twitter account, and it's been a steady stream of activity from the Internet-connected devices and tools in my life ever since.

Sometimes it shares some bad news:

Continue reading "Dispatches from my Internet of Things"