Email

The folks at Front tell us that email will last forever.

I was thinking about how people complain that they can’t send links in Twitter direct messages. And I was thinking about a system where you could send links privately.

In the ideal situation that system would be owned by nobody and would be based on open standards. The message wouldn’t pass through any one specific service.

That system is email, of course. It’s a miracle.

Yet we hate it so much.

There’s no technical reason why that specific use case — message with link, no subject line, quick to find contact, quick to write and send — couldn’t be handled by email. The issue is user interface.

It continues to surprise me that email app vendors don’t think about the way people communicate now. They may think about the way people communicate by email, but they don’t think about how people communicate in general.

Maybe it’s an issue of economics. If you write an email app your best case scenario is, apparently, to get acqui-hired and then discontinue the app.

* * *

Another huge problem with email is spam. I get a few hundred a day.

Think back to 2004. Bill Gates said that spam would be a thing of the past in two years. That would have been 2006 — eight years ago. It’s worse now than ever.

I get much less Twitter spam than I get email spam. But the downfall of any system where anybody can send a message to anybody is that anybody will send a message to anybody.

It makes me cranky.

NickH and NetNewsWire

Nick Harris, my co-worker at NewsGator and Sepia Labs, writes NetNewsWire – Time to Breakup:

I need read state sync and a multiple device experience. I want to be able to click links to blog posts I see on Twitter and have them marked read in my RSS app. Really I want to spend less time dealing with read state — something I was spoiled with at NewsGator.

It’s not easy. It’s an incredibly difficult problem to solve, but it is solvable. The question is whether the solution can be profitable.

BASIC Was Cool

James Hague, Lessons from 8-Bit BASIC:

There’s a small detail that I skipped over: entering a multi-line program on a computer in a department store. Without starting an external editor. Without creating a file to be later loaded into the BASIC interpreter (which wasn’t possible without a floppy drive).

Here’s the secret. Take any line of statements that would normally get executed after pressing return:

PLOT 0,0:DRAWTO 39,0

and prefix it with a number:

10 PLOT 0,0:DRAWTO 39,0

The same commands, the same editing keys, and yet it’s entirely different. It adds the line to the current program as line number 10. Or if line 10 already exists, it replaces it.

I feel a little bad for all the people who didn’t learn to program this way. It was so much fun.

Swift, Strings, and Memory Use Question

Do I understand this correctly? NSString (usually) stores strings in memory as UTF-16, while Swift uses UTF-8. Correct?

If so, then I think this means that for text-heavy apps — such as Vesper, such as RSS readers and Twitter clients and similar — there could be a nice reduction in memory use due to this change. (If they use Swift strings.)

True?

Update a few minutes later: Alastair Houghton tweets:

No. NSString stores strings as 8-bit ASCII where possible. It only uses UTF-16 if it has to

Update 8:40 pm: Well, maybe. Sometimes. Consider the many European languages that have accented characters — those aren’t ASCII characters, which means NSString would use UTF-16. But if Swift uses UTF-8 in those cases, then it should save memory, since not all (or even most) of the characters are accented.

Or, as Alastair Houghton tweeted when I asked him about this case:

Yes, e.g. in European languages (French, German, Spanish etc) where many chars are ASCII but some are accented.

(Also remember that words with accented characters — résumé, éclair — appear in written American English too.)
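
A quick way to see the size difference is to compare the two encodings of a mostly-ASCII string that contains a few accented characters. Here’s a small Swift sketch (it counts encoded bytes, not actual NSString or Swift String memory use, so treat it as an illustration rather than a measurement):

let s = "She updated her résumé at the café."
// UTF-8 uses one byte per ASCII character and two bytes for each é.
let utf8Bytes = s.utf8.count
// UTF-16 uses two bytes per code unit for every character in this string.
let utf16Bytes = s.utf16.count * 2
print("UTF-8: \(utf8Bytes) bytes, UTF-16: \(utf16Bytes) bytes")
// Because most of the characters are plain ASCII, the UTF-8 encoding
// comes out at roughly half the size of the UTF-16 encoding.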

The Atlantic on Twitter

In April of this year, The Atlantic published A Eulogy for Twitter:

People are still using Twitter, but they’re not hanging out there.

(Via The New Atlantis.)

[Sponsor] thoughtbot

We’re looking for an iOS developer to join our team in New York City. You’ll work with top-notch iOS developers from around the world, building great products that customers love to use.

We focus on high-quality work and the importance of learning, so that we can hone our skills and expand our knowledge. We’re already doing some projects in Swift.

We produce the iOS development podcast Build Phase, and our blog Giant Robots Smashing Into Other Giant Robots.

Learn more on our website and contact us at jobs@thoughtbot.com.

Stimulus Program

Dave Winer, How to stimulate the open web:

If I create a tool that’s good at posting content to Facebook and Twitter, it should also post to RSS feeds, which exist outside the context of any corporation. Now other generous and innovative people can build systems that work differently from Facebook and Twitter, using these feeds as the basis, and the investors will have another pile of technology they can monetize.

Waffle on Open Standards

The New Old World:

…more than anything Twitter and Facebook needs to get some competition from something that’s as approachable as Twitter and Facebook and has a clear road ahead to being an open standard.

There are many people clamoring for open standards. I don’t want them because I love open standards; I want them because they are brilliant means to an end.

297,897 Social Media Gurus

B.L. Ochman, writing in June of this year: Twitter bios show epic growth — to 297,897 — of self-proclaimed social media gurus.

The list now tops 297,897 — up from a mere 16,000 when we first started tracking them in 2009.

By January, 2013, the count had swelled to 181,000, causing us to note that social media experts were multiplying like rabbits.

(Via Jamie Zawinski.)

Functions Returning Functions

Justin Driscoll, First Class Functions and Delayed Evaluation in Swift:

This concept of “functions as data” enables the development of complex systems composed of small bits of reusable logic in an elegant and concise way.
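
The article has fuller examples; here’s a minimal Swift sketch of the basic idea (my own example, not Justin’s): a function that builds and returns another function, which you can then store, pass around, and call later.

// makeAdder builds and returns a new function that adds a fixed amount.
// The returned closure captures `amount`; nothing runs until it is called.
func makeAdder(_ amount: Int) -> (Int) -> Int {
    return { value in value + amount }
}

let addTen = makeAdder(10)      // a function, stored like any other value
print(addTen(32))               // 42
print([1, 2, 3].map(addTen))    // [11, 12, 13]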

Gabe on Truncated Feeds

Macdrifter:

From the perspective of someone writing on the Internet, it’s so incredibly difficult to get someone to care about what I think, I can’t imagine making them work for it. It’s such a huge privilege to have anyone contemplate my words, that I feel obliged to roll out the welcome mat.

On Taking Breaks

Marco was recently in a fight on the internet. I missed it and don’t know what it was about. I have no interest in being a spectator in these kinds of things — and if they were to happen to me (they don’t) I’d stop using my Twitter account.

Because that’s the thing — though it may have been started by a blog post, it all happens on Twitter.

Even though I follow people I like and respect, there’s no way around seeing some of the crap that happens on Twitter. Even if you don’t use Twitter at all, you will have seen articles about people being harassed and threatened. You will have noticed the pure toxic sludge that pours through the service. (A hypothetical “Dawn of the Idiocracy” prequel would feature Twitter prominently.)

And it’s worse than any blog comments system, because if you use it, anybody can put something in front of your face whether you want it or not.

Twitter is also wonderful, and I get so much value out of it. But it’s like 51% good and 49% bad.

I don’t see it getting any better. Hopefully it can hold the line at just-barely-worth-it. (But the recent changes to the timeline make that a little less likely.)

So here’s what I do: I think of Twitter as part of my workplace. When I’m done for the night, my iPhone and laptop stay in my office. I’ll often pick up my iPad and do some reading — but there are no Twitter apps on my iPad and I don’t go to twitter.com on my iPad.

Some other things: Sheila and I eat all our meals together, but we don’t take out our phones while eating. We don’t take out our phones while going for our nightly walk.

In other words: if we’re hanging out, we’re hanging out with each other rather than with ourselves and the entire Twitter world. That world can go away for a while — it’ll still be there later, and it will still be the same stuff it is every single day.

Twitter is addictive in the same way slot machines are. You get small bits of pleasure at random intervals, and it doesn’t really change. So you keep pulling the lever or pushing the button.

And it’s cheap, too — 140 characters can’t compare to the grown-up pleasure of a good conversation with a real person.

So I just leave it alone more. And I’m fine. Better than fine.

Greg on CloudKit

Greg Pierce explains why Drafts 4 will use CloudKit:

Why am I willing to make these trade-offs for CloudKit, despite its limitations? Because, ultimately, developer perspectives aside, I felt it was the right choice for my customers.

Tim on CloudKit

Tim Schmitz, Web Services, Dependencies, and CloudKit:

That got me thinking about how CloudKit fits into this picture. As an iOS developer, CloudKit is immensely appealing at first glance. The API is low-level enough that you have a good deal of control over how your app interacts with the server. At the same time, you get a lot of server-side functionality for free, which leaves you with more time to focus on building a great app. But I still have a lot of misgivings about it.

I’ve written about CloudKit before. It seems well-designed, and I suspect (though not based on experience yet) that it’s better executed than earlier, broadly similar services from Apple.

We would have been tempted to use it with Vesper — but it would have meant, for instance, that we couldn’t do a web app version of Vesper.

One Indie Developer’s Tale

Gabriel Hauber:

Call me mad. Call me whatever you want :)

Great story.

A Vesper Performance Enhancement

Vesper 2.003 came out earlier this week — and it includes a syncing performance enhancement which I thought I’d write up.

Performance enhancements aren’t always as straightforward as the one I’m about to describe. Often they require the hard work of revising the data model, adding caching, or doing your drawing the old-fashioned way (as opposed to just setting a property on a layer, for instance).

This one happens to be easy to write about, so I will.

But first I’ll say that Vesper is already fast and gets plenty of praise for its performance. I’m a speed freak with zero patience — except for the considerable patience required to make sure my software works for people like me.

So this performance enhancement isn’t something that any current users are likely to notice, but it will become important in the future as people create more and more notes.

* * *

Here’s what we noticed: the initial sync on a new device, with a large number of notes (more than almost anybody has; more than I have), seemed unexpectedly slow.

My first thought was that the server was having trouble handling this. It wasn’t — it was returning all the data quite quickly with no complaint. And the amount of data was around the same as a typical image file. A lot, sure, but not an insane amount for a first sync.

So I ruled out the server, networking, and JSON translation as issues. Next I did some poor-man’s profiling — I hit the pause button in the debugger a few times as the app was syncing.

And the same function always appeared: getUniqueID. It’s a little C function that calls SecRandomCopyBytes to generate a random unique ID for a note (VSNote object).

The answer was clear: that function either needs to get faster, or we need to not call it so often. Or both.

Not Call It So Often

The syncing system creates a VSNote object for each JSON note pulled from the server that does not exist locally. On first sync, that’s every single note.

The problem: VSNote’s init method generates a unique ID by calling getUniqueID. This is superfluous in the case of notes coming from the server — those notes already have a unique ID.

So I did the obvious thing: I created an -initWithUniqueID: method that allows the creator to specify a unique ID, which means I could avoid all those calls to getUniqueID.
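
Here’s a minimal Swift sketch of that change (the real VSNote class is Objective-C, and the names below are illustrative rather than Vesper’s actual API):

// A hypothetical note model with two initializers.
final class Note {
    let uniqueID: Int64
    var text: String

    // Notes built from server JSON pass in the unique ID they already
    // have, so no random ID is generated for them.
    init(uniqueID: Int64, text: String) {
        self.uniqueID = uniqueID
        self.text = text
    }

    // Locally created notes still generate a fresh ID.
    convenience init(text: String) {
        self.init(uniqueID: Note.generateUniqueID(), text: text)
    }

    // Stand-in for the real ID generator (see below): a positive 53-bit
    // integer greater than the tutorial-note IDs.
    static func generateUniqueID() -> Int64 {
        return Int64.random(in: 101 ... (Int64(1) << 53) - 1)
    }
}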

Awesome. Problem solved. Done.

I could have stopped there, but I didn’t.

Make the Function Faster

It still bothered me that the function was so slow. It didn’t really matter at this point. But why would SecRandomCopyBytes be so slow? Something like that could be a little slower than some other system APIs, but the numbers I was getting still seemed weirdly slow.

So I did a straightforward timing test, and SecRandomCopyBytes itself is plenty fast enough. What gives?

I thought it might be the collision check. There’s an NSMutableSet of all note unique IDs, and we check the returned value of getUniqueID to make sure it’s not in that set. Profiling told me that that’s not the slowdown. (As expected, since the collision check happens in getUniqueID’s caller, not in the function itself.)

What I found was that the limits on unique IDs were causing the problem.

The limits are these: the ID has to be a positive 53-bit integer (not a full 64 bits), and it has to be greater than the constant VSTutorialNoteMaxID, which is 100.

The body of getUniqueID is actually a loop. It calls SecRandomCopyBytes repeatedly until it gets a uniqueID that fits within the limits.

I had thought, naively, that it would typically take one to three calls to get a suitable uniqueID — but I was wrong. Ten passes through the loop wasn’t that unusual, and it could be more.

The solution here was pretty simple: if the number is outside the range, use some arithmetic to get it inside the range.

If it’s negative, subtract it from zero to make it positive.

If it’s greater than the 53-bit limit, divide it in half (in a loop, until it fits within the limit).

If it’s less than VSTutorialNoteMaxID, add VSTutorialNoteMaxID.

This made it so that getting a unique ID that fits within the limits takes exactly one call to SecRandomCopyBytes instead of potentially many calls.
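
Here’s a hedged Swift sketch of the revised approach (the real getUniqueID is a C function; the names here are mine, and VSTutorialNoteMaxID is assumed to be 100, as described above):

import Security

let VSTutorialNoteMaxID: Int64 = 100
let maxUniqueID: Int64 = (1 << 53) - 1 // largest allowed 53-bit value

// One call to SecRandomCopyBytes, then arithmetic to force the result
// into the allowed range. (The old version looped, throwing away any
// random value that fell outside the range and trying again.)
func generateUniqueID() -> Int64 {
    var value: Int64 = 0
    let status = SecRandomCopyBytes(kSecRandomDefault, MemoryLayout<Int64>.size, &value)
    precondition(status == errSecSuccess, "SecRandomCopyBytes failed")

    if value < 0 {
        // Subtract it from zero to make it positive.
        // (Int64.min has no positive counterpart, so handle it separately.)
        value = (value == Int64.min) ? Int64.max : (0 - value)
    }
    while value > maxUniqueID {
        value /= 2 // divide it in half until it fits in 53 bits
    }
    if value < VSTutorialNoteMaxID {
        value += VSTutorialNoteMaxID // keep it above the tutorial-note IDs
    }
    return value
}
// The collision check against the set of existing note IDs still happens
// in the caller, as before.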

There’s still the possibility of a collision with an existing ID, but that would be so rare (most likely never), and the consequence is just a second call to SecRandomCopyBytes, so I didn’t worry about it.

But, again — most performance issues I run into don’t have nice straightforward solutions like this one. When they do, I don’t mind. It used to be that I’d beat myself up for not doing this better the first time, but these days I don’t. I’m just glad that I learned something and made the software better.

Tuples All the Way Down

David Owens suggests using named tuples instead of structs in some cases.
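
For instance, here’s a small hedged sketch of the kind of thing he means (my example, not his): a named tuple standing in for a tiny struct when all you need is a labeled bag of values.

// A named tuple used where a one-off struct might otherwise be declared.
typealias Point = (x: Double, y: Double)

func midpoint(_ a: Point, _ b: Point) -> Point {
    return (x: (a.x + b.x) / 2, y: (a.y + b.y) / 2)
}

let m = midpoint((x: 0, y: 0), (x: 4, y: 2))
print(m.x, m.y) // 2.0 1.0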

(“It’s tuples all the way down” is a new joke to iOS and Mac programmers. I know you’re groaning, but let us enjoy it and feel clever for a few minutes.)

Matt On Swift and iOS 8 Evolution

Old pal Matt Neuburg is interviewed on MacVoices.

There’s a video version and an audio version — I’m playing the audio version in the background right now.

Matt’s a dynamo. He should be on podcasts more often.

Sponsorships Available

We’re still running a summer sale for inessential.com sponsorships — $500 for a week ($400 for each of two or more weeks), which is about one-third off the normal price.

If you’d like to support this blog and talk about your thing (app, conference, service) to inessential.com readers — who are, to a person, successful, intelligent, and curious — then get in touch with me.

The week starting Sept. 1 (this Monday) is available — as are later weeks. See the Sponsorship page for more info.

Web Services and Dependencies

Tim Schmitz asked me on Twitter:

Curious on your take on dev services re: social network post. Should devs avoid svcs like Azure because an app may outlast it?

(He’s referring to my post from yesterday.)

You can’t escape dependencies — even if you’re running Linux, Apache, MySQL, and PHP on a virtual machine — and so you need to evaluate everything.

Some questions to ask:

How long will this service be around? How difficult would it be to move? How many of this service’s unique features do I use? How much benefit do I get from those?

This extends to software, too. What is a given package’s reputation for security? Is it likely to be maintained in the future? Will upgrading to get a security fix also mean revising some of my code?

You have to plan for scale. Will this service and those software packages allow room for growth? (Sharding, running multiple instances, etc.)

And you have to balance developer time. The point is to spend less time on housekeeping and more on bug fixes and features.

With Vesper we chose Azure Mobile Services on the grounds that it’s likely to be around a very long time and it’s based on Node (which is well-supported and runs in many places). The folks there were extremely helpful as we were making our decision, and that helped us decide. (You want to go where you’re wanted, for one thing.)

That said, we still have contingency plans, because anything could happen. There are no cases where you wouldn’t want to plan to be able to move. (In fact, we have two plans: one for moving from Mobile Services to another Node provider, and one for moving to a service running Sinatra, in case we have reason to get off Node.)

Our contingency plans aren’t specified to the smallest detail, but that wouldn’t be more than a day of work, which is acceptable. (One reason not to get too detailed: the options will look different in six months, one year, three years, etc.)

I have no expectation that we’ll ever need to move. Azure is a big bet for Microsoft (the new CEO comes from the Azure group). We’ve found that the system performs wonderfully (we get praise for efficient syncing) and there’s a ton of room for growth — we’ve barely scratched the surface so far. (We run with just one instance, and that’s well more than enough.) We’re entirely happy with our choice.

That’s what works for us. But every app and every developer is unique, and there’s no way out of evaluating all the dependencies and making the best decision. I don’t think there are easy answers — it takes diligent research and thinking.
