
How Do We Harness Our Idle Mobile Consumption?

I’ve got this nagging thought. More of a question really. The kind of thing that keeps you up at night.

Let’s rewind a couple weeks, just before I uninstalled FB from my phone. I started noticing my typical usage pattern of the app – I found myself skimming content, finding nothing of interest, refreshing, not seeing anything new, refreshing again. How often did I do it? I didn’t keep a count or anything, I just kind of became aware of it, like the lights coming up slowly in a theater. But it was a lot. So much so that after I uninstalled the app, I found myself pulling out my phone instinctively, unlocking it, and wondering what I was doing and why. (Spoiler alert: after uninstalling the FB app, my phone spends about 50% less time in my hand. Try it for a week.)

I keep coming back, though, to the usage pattern – skim, refresh, nothing, refresh. Looking back, I did that a lot. Paying closer attention to how I use Twitter, I do it some, but less than I did on FB.

What am I looking for?

Ultimately, I don’t think I’m looking for anything. I think I’ve got a few seconds – not much more than that – and I want to fill them with something. That something, in this case, was scratching at the itch of data addiction. For some people it’s taking a turn in a casual game. What I’m not able to dig into effectively is how much idle time we collectively spend this way. It’s not the same data set as the mobile casual games market, but there’s some overlap. In lieu of actual research papers in the area (but do share those if you have them) – how many of you find yourself holding your phone in your hand, looking for something to do with it? How often?

So I see these few wasted seconds, and I multiply them by N, and it looks like a big pile of wasted seconds, and I wonder how we can put that time to better use. Projects like Galaxy Zoo did a great job of this on the desktop, and the team behind it has since built a number of crowdsourced data-sorting projects that look great – and maybe that’s what I’m after.

Maybe.

What I really want to do is harness that time for open source projects. I just can’t work out how. Mobile is a poor form factor for writing code or documentation. Anything text-intensive falls outside the parameters of spending a few idle seconds on something. But I’m not ready to take “It’s not possible” for an answer.

Looking at what the mobile form factor is good for, I want something I can do with my thumb. A few use cases spring to mind.

Triage Bugs – A bug has been filed. I see the text of the bug and get asked a Yes or No question that helps move it along the pipeline. An example might be as simple as “Is this a bug?” – it may be chaff or spam or a test message. Other examples might be “Do we need to ask for a screenshot?”, “Is the version info included?”, “Is this correctly categorized?” Every Yes answer helps move the bug along; every No answer could prompt the filer for more info, or let the user take a subsequent action (add a quick note, categorize, etc.). I’ve sketched what this flow might look like a little further down.

Pull Request Micro-review – Doing a full review on a pull request on a phone sounds painful. Is there a way to ask simple questions of a pull request that would be useful? “Does this conform to style guidelines?”, “Are there tests included?”, “Is the code well documented?”, or maybe even “Do you like this Pull Request?”

Stack Ranking – Given a pair of bugs or features, which one do you think is the higher priority?

Rendering Validation – This is very niche, but if the project performed some form of graphics manipulation, this would be a great space to feed in test data (images) and validate that the output matched expectations.
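To make that bug-triage case a little more concrete, here’s a rough sketch of what one thumb-sized task might look like as data, and what a single Yes/No answer would do with it. Every name, id, and field below is made up for illustration – this is a thought experiment, not any real tracker’s API.

    // Purely hypothetical shape for one thumb-sized triage task.
    var task = {
        bugId: 12345,                                      // made-up id
        summary: "Crash when rotating the device mid-upload",
        question: "Is the version info included?",
        ifYes: { action: "advance" },                      // move the bug along
        ifNo:  { action: "request-info", ask: "version" }  // prompt the filer
    };

    // One tap of the thumb resolves the task and tees up the next one.
    function answer(task, saidYes) {
        var outcome = saidYes ? task.ifYes : task.ifNo;
        console.log("Bug " + task.bugId + ": " + outcome.action);
        // A real system would report the outcome back to the tracker
        // and fetch the next question here.
    }

    answer(task, false); // logs "Bug 12345: request-info"

The interesting question isn’t the data shape, of course – it’s whether a stream of questions like that could actually be answered well in a few idle seconds.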

Any and all of these have potential downsides, but rather than looking at why they would never work, I’d love to hear what could make them useful. Or what else might be useful. Or any thoughts on what people have tried – what worked and what didn’t. Or really just anything.

Any thoughts?

A11YLint Brackets Extension – My Attempt At Realtime Accessibility Improvement

I do all my coding in Brackets – the open source HTML, CSS and JavaScript editor project started by Adobe.

One of the things I really like about Brackets is this integration they’ve done with JSLint – a tool that looks at your JavaScript while you’re writing it and tells you when you’re doing something you probably shouldn’t. JSLint can be a little over-strict sometimes, but using it has had the real benefit of forcing me to write cleaner and more consistent code.
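For a flavor of what that looks like, here’s a tiny, contrived snippet with a few of the complaints JSLint typically raises while you type:

    function add(a, b) {
        if (a == b) {        // JSLint expects '===' instead of '=='
            total = a + b;   // 'total' was never declared with var
        }
        return total         // the missing semicolon gets flagged too
    }

Small stuff individually, but getting nagged about it right in the editor is what does the trick.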

I’ve had an idea kicking around for a while that plugging an accessibility linter into Brackets might help me do the same thing when I’m writing HTML (which I don’t do that often). So this past weekend, after talking the idea through with a couple of people, I sat down and banged something out.

The A11YLint Brackets Extension is available on GitHub, is MIT-licensed, and may or may not be doing all the right things. I’d love it if you checked it out and gave me some feedback. I know it’s incomplete, but it’s a start.

Want to help, but don’t know where to start? Write me some failing tests – create an HTML page that fails a rule not currently covered by the A11YLint Brackets Extension (an image with no alt text, say, if that one isn’t handled yet) and submit it. Or open a bug and describe the test.

Also – I don’t know what your development process is like, but while I was working on this project I decided to shoot some video along the way, and what came out the other end is this.

Questions, comments, etc – I’d love feedback on this.

Writing Accessibility Into A Design Application

Note: This might be an especially good post to remind people that I’m blogging on my own, and anything I say here isn’t on behalf of my employer.

I pushed out an update today to the Edge Inspect Chrome Extension that mainly included a couple of localization tweaks and the addition of accessibility tags to the extension. Given that Edge Inspect is primarily a tool for designers, we had some discussion about whether this was even necessary. I don’t know if I have anything new to say on the subject, but I thought it was worth writing about.
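For context, “accessibility tags” here means the markup-level hints assistive technology relies on – labels and roles on the extension’s controls so a screen reader has something to announce. Roughly this sort of thing (illustrative only, with made-up element ids; not the actual change set):

    // Illustrative sketch – not the real Edge Inspect diff.
    var refreshButton = document.getElementById("refresh-devices"); // hypothetical id
    refreshButton.setAttribute("aria-label", "Refresh connected devices");
    refreshButton.setAttribute("title", "Refresh connected devices");

    var statusArea = document.getElementById("connection-status");  // hypothetical id
    statusArea.setAttribute("role", "status");       // screen readers announce updates
    statusArea.setAttribute("aria-live", "polite");  // ...without interrupting the user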

I’ll preface by saying that I’m not the guy to talk to about accessibility and your content. I’ve never given it much thought before, and I’m pretty sure what you’re reading right now doesn’t give you much in the way of accessibility features unless it came baked into the theme already. In fact, when I typed that, I had to go back and set the title of the link I posted earlier because I skipped that bit. In short, I’m probably part of the problem in the first place. I know that accessibility doesn’t mean “usable by the blind”, but I’m probably not getting even that right.

So when the question of adding accessibility tags to Edge Inspect came up, I was one of the first to say “It’s a visual design tool. It’s not something you can use if you’re blind.” The discussion went back and forth, and ultimately we decided to go ahead and do the work – partly because our product owner felt strongly that we should do it as a matter of principle, and partly because the company has a dedicated team of people willing to do a lot of the work for us. Most of the updates to the extension weren’t done by anyone on our team. It’s hard to say no to free work, especially when it’s really well done.

But the conversation in general forced me to challenge a fundamental assumption I’d made about the product, and in a way that really clicked with changes I’ve been trying to make in how I approach everything.  Like most of us I guess, I used to think I could get to the right solution if I thought things through hard enough.  I’ve come around to the notion that really the only way to get to a good solution is to test an assumption.  In other words there is either stuff I’ve validated, or stuff I haven’t finished testing yet.

And in this context, I’d assumed that Edge Inspect wouldn’t be used by the blind, and never bothered to test that assumption. Further, we’d kinda made it impossible to test the assumption by not including accessibility in “those things we do along the way rather than bolting them on at the end.”

So I guess there were really two lessons. The first is that we had our thinking a little backwards. We were saying “It’s a visual design tool, therefore no blind users” when we probably should have been saying “Users could be blind – how can we make this design tool useful to them?” The second is that if we’d been thinking that way all along, we wouldn’t have needed to try to bolt the work on at the end.

Will it make any difference to our users, downloads, market impact, adoption rate, or revenue? I dunno. But I’m pretty sure that if we don’t test things like this, we’ll never really know for sure.

Note: Edited to add a link to the accessibility team that did the work for us. If you’d like to learn more about the company’s accessibility team, you can read about them at http://www.adobe.com/accessibility