Something I really appreciate when I see it manifested in an engineering lead is the habit of letting subordinates’ arguments frequently win the day during run-of-the-mill code reviews, even when the lead remains skeptical or — especially — is in sharp disagreement with the proposed changes. Disagreeing but permitting isn’t a sign of weakness. In the context of a healthy team dynamic, it’s a sign of good leadership.
There are a finite number of Bad Idea Rejection Tokens™ that a lead can cash in before their teammates conclude that the team lead has a closed mind and a bad attitude. “I wouldn’t have done it that way” is not something an engineering lead should find themselves saying often during code review. Making a habit of rejecting your teammates’ work is toxic for morale and productivity. If instead the lead only occasionally exercises their veto powers, then the teammates can trust that when the lead rejects their work it isn’t motivated by stubbornness but by a good-faith effort to practice good judgment. Finding the right balance between permissiveness and restraint is key.
Welcome Home (Pod), a Very Short Play About Apple’s Inexcusable Failure to Recognize Even Mildly Disfluent Speech
Hey Siri, play, uhh—
(Siri light turns on)
IPHONE, IPAD, APPLE WATCH
(Siri reacting on all of them)
(still thinking, all those devices reacting doesn’t help)
(resumes playback of the previous song from an hour earlier)
IPHONE, IPAD, APPLE WATCH
(go dark again)
(irritated, but finally remembering to use the key phrase)
IPHONE, IPAD, APPLE WATCH
The front door opens as HENRY, four-and-a-half years old, comes in after a weekend at his grandmother’s.
(notices the HomePod)
It’s a HomePod.
HomePod? What’s it do?
It has Siri on it, it plays music.
Henry is excited. He’s thinking of a song in his dad’s “Henry” playlist called “Drift feat. RZA” that he’s been listening to at least twice a day for the past few weeks. He leans his face towards the top of the HomePod.
The HomePod does not respond.
No, you have to say “Hey Siri”
IPHONE, IPAD, APPLE WATCH
Hey Siri! Play Drift!
Okay, playing Drift by Joshua Lee.
(wrong song starts playing)
No! Siri, play Drift from Pacific Rim!
(unresponsive, continues playing wrong song)
No, you have to say
(quietly this time)
(lights up, ducking the audio)
(with some disfluency typical of his age)
Play Pacifi— play Drift from Pacific Rim.
OK, here’s some Dr. Dre just for you.
(Dr. Dre starts)
Hey Siri! Play the Drift song from Pacific Rim.
I couldn’t find “Fred’s F****g Pussy” in your library or on Apple Music.
(Dr. Dre resumes)
Siri! Play the Drift song from Pacific— from Pacific Rim.
(Dr. Dre continues, unabated)
(loudly, over the music)
You’ve gotta say “Hey Siri”
IPHONE, IPAD, APPLE WATCH
(lights up, ducking the audio)
Siri! Play Drift from Pacific Rim
I couldn’t find Drake Pacific Rim in your library or on Apple Music. (Dr. Dre continues)
Hey Siri, stop playing.
Hey Siri, play Drift from Pacific Rim
I couldn’t find Drift From Pacific Rim in your library or on Apple Music.
Hey Siri, play the song Drift from the Pacific Rim soundtrack.
OK, now playing “Drift featuring RZA” by Blake Perlman and RZA. (Correct song starts)
Keep Screen Unlocked is a handy (optional) feature in ’sodes that keeps play controls accessible at all times. It isn’t a technical marvel by any stretch, but it comes in handy if you:
- Go on long trips or commutes
- Don’t have or can’t use Bluetooth in your car (or don’t have easily-reached Bluetooth controls)
- Don’t have CarPlay, either
- Put your phone in a dashboard mount
- Use a car charger for your phone
It’s not a universal use case, but it’s one that I’ve been in. Once you’ve enabled the feature in Settings.app, all that’s required to trigger the feature is for these three conditions to be met:
- ’sodes is running in the foreground
- ’sodes is playing (not paused)
- Your phone is connected to a charger
Why? Because this keeps ’sodes play controls accessible in a hurry. Without this, your phone’s display will eventually go dark and your screen will lock, requiring you to manipulate your phone before you can tap a pause button, if you can even tap it in iOS’ tiny Now Playing lock screen widget. That latter irritation is why ’sodes play/pause button is so generously-sized. You can tap it easily even with your arm extended as your car bounces down the highway.
This setting is off by default. And even if you enable it, it will not engage unless all three conditions are met. Otherwise your normal locking behavior will take over.
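The gating logic above is simple enough to sketch. This is a minimal illustration under my own assumptions — the function and parameter names are hypothetical, not ’sodes’ actual code — showing how the three conditions (plus the opt-in setting) might combine before touching UIKit’s idle timer:

```swift
import Foundation

// Hypothetical helper sketching the three-condition check described above.
// All names here are my own, not the app's actual implementation.
func shouldKeepScreenUnlocked(featureEnabled: Bool,
                              isForeground: Bool,
                              isPlaying: Bool,
                              isCharging: Bool) -> Bool {
    // The opt-in setting and all three runtime conditions must hold.
    return featureEnabled && isForeground && isPlaying && isCharging
}

// In a real app, the result would presumably drive UIKit's idle timer:
// UIApplication.shared.isIdleTimerDisabled = shouldKeepScreenUnlocked(...)
```

Folding the decision into one pure function like this makes the fall-back behavior automatic: the moment any condition goes false, normal screen locking resumes.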
If you (or someone you like a whole lot) would enjoy a more relaxing way to get into podcasts, please check out ’sodes. It launches February 15th, but is available for pre-order now.
One of the things you’ll notice first when using ’sodes is that there are no downloads to manage. This isn’t one of those things where I ran out of time. It’s a deliberate omission. This post is about why I made that decision, and it also takes a quick look under the hood at the technical stuff that makes it work.
I have long wanted a podcast player that feels more like a TV-streaming app: tap a show, tap an episode, and listen. I know it’s possible to use other podcast players that way, but it’s tedious. You subscribe to a show and episodes start downloading. Eventually you have to go find where the settings are to turn off automatic downloads, but first you have to find all the active downloads and cancel them. Maybe you aren’t able to cancel them in time. Since those audio files count against your iPhone storage, you hunt down all the pesky downloads and delete them. Then when you finally start listening to an episode, you have to check that it isn’t being re-downloaded.
For a lot of folks, all this stuff is part of the appeal. For me, managing downloads is a vestigial trait from when podcast players looked like this:
I made ’sodes because I think the experience should be better for casual users like me. There are no download queues to manage, no auto-deletion behaviors to configure, no inboxes to triage. You tap an episode and it plays. It requires an internet connection, yes, but so do video streaming apps1. My use case for podcast listening mirrors how I binge watch pleasantly-shitty Netflix shows about crusty-but-benign police lieutenants and ex-Supreme Court justices and managing editors.
Because ’sodes doesn’t have any of the baggage of an old-school podcast player, the design is airy and streamlined. There’s no need for a tab bar or a navigation bar. There are no inboxes or queues. Many times there’s not even a need to leave the app’s home screen. Your most favorite shows and a handful of recent episodes are already there:
Behind the scenes, playing a podcast audio file is different than streaming a TV show. Services like Netflix use technologies like HTTP Live Streaming (HLS) to serve video on-demand. HLS plays video at variable levels of quality so playback can be adjusted for slow connections or metered usage. HLS requires the content provider to encode the video in a range of qualities, fragmented into hundreds of small files. Podcast producers don’t do that. Podcasts are encoded as one-size-fits-all MP3 files. “Streaming” a podcast is not really streaming but downloading on demand.
So how does ’sodes manage on-demand downloads? Because on-demand playback of a podcast is only able to use the original quality MP3, I want to keep network usage to a minimum. Users should feel comfortable using ’sodes on a cellular network. I limit network usage via an obscure but powerful API that Apple provides called AVAssetResourceLoaderDelegate (if you want to read an even deeper dive click here). This API allows the developer to take over control of fulfilling all the audio data requests coming from the system audio framework. Whenever the system requests a range of audio data for an episode, I first check a local file to see if I have any of those byte ranges:
If I have the data already, I use it. If there are any gaps, I only download the data it takes to fill in the gaps. I never download any data that’s not explicitly requested by the system audio framework. In this manner, you can take several app sessions to finish listening to an episode, but ’sodes won’t download any given byte of data more than once. This drastically reduces network usage without requiring you to manage a download. Further, this cached data is stored in a location on your iPhone that is managed automatically by the operating system and doesn’t count against your iPhone storage. ’sodes keeps audio data for a handful (currently 3) of the most recently-played episodes so that you can switch between a few episodes when you have a spare moment without triggering more data usage.
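The byte-range bookkeeping described above can be sketched as plain range arithmetic. This is a simplified illustration under my own assumptions — the type and method names are mine, and the real app wires this into AVAssetResourceLoaderDelegate callbacks rather than a standalone struct:

```swift
import Foundation

// Hypothetical sketch of the gap-filling idea: track which byte ranges
// are already on disk, and compute only the missing subranges to fetch.
struct ByteRangeCache {
    // Downloaded byte ranges (not necessarily sorted or merged here).
    private(set) var cached: [Range<Int>] = []

    // Returns the subranges of `requested` that still need downloading.
    func gaps(in requested: Range<Int>) -> [Range<Int>] {
        var gaps: [Range<Int>] = []
        var cursor = requested.lowerBound
        for range in cached.sorted(by: { $0.lowerBound < $1.lowerBound }) {
            guard range.upperBound > cursor else { continue }
            if range.lowerBound >= requested.upperBound { break }
            if range.lowerBound > cursor {
                gaps.append(cursor..<min(range.lowerBound, requested.upperBound))
            }
            cursor = max(cursor, range.upperBound)
        }
        if cursor < requested.upperBound {
            gaps.append(cursor..<requested.upperBound)
        }
        return gaps
    }

    // Record a freshly downloaded range.
    mutating func insert(_ newRange: Range<Int>) {
        cached.append(newRange)
    }
}
```

With this shape, a request for bytes the cache already holds produces no network traffic at all, and a partially-cached request downloads only the holes — which is exactly why no byte is ever fetched twice across sessions.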
If you (or someone you like a whole lot) would enjoy a more relaxing way to get into podcasts, please check out ’sodes. It launches February 15th, but is available for pre-order now.
Yeah, I know Netflix (and maybe others) have started adding offline modes, but those are buried inside secondary screens and are not allowed to get in the way of the primary casual experience. ↩︎
This might be the most delightful Apple product I’ve ever purchased. It feels like an inflection point in the story arc of consumer devices. The addition of cellular isn’t iterative. It’s revolutionary. In other words:
This is my second Apple Watch. I bought a Series 3 with cellular in Space Black with the black Milanese loop. I previously owned an original Apple Watch in stainless steel which I pre-ordered at launch and have been wearing daily ever since. I genuinely missed my original watch if I forgot to wear it, but I didn’t love it. I used it for notifications and for some fitness tracking.
OMG cellular. The addition of cellular connectivity is life-changing. It works as advertised. I can take phone calls (including FaceTime Audio calls), listen to voice mail, send and receive text messages (both SMS and iMessage), check email, update my grocery list, all from only the watch. I’ve done this indoors and outdoors, in a third-floor apartment, on a YMCA soccer field, rolling down a highway, inside my son’s school, strolling around a store, at the gym, waiting at baggage claim. It just works.
Going phoneless. I was born in 1981, so I’ve got one foot on either side of the tech revolutions of the ’90s and ’00s. As far as I can recall, this is the first time since I first got a cellphone (let alone a smartphone) that I am deliberately leaving the house without any device in my pocket. It’s a refreshing feeling. I took a four-mile walk for exercise, drove an hour to my parents’ and back to pick up the kid, and picked up my wife from the airport. Apple Watch with cellular supports a critical slice of the features a smartphone provides, which means I get to enjoy the best of both the old and new worlds: I am free from the temptation to waste quiet moments on social media and soul-crushing national news, but not at the expense of missing out on texts and phone calls from friends and family, or getting directions home, or triaging the occasional urgent email. This newfound flexibility is, simply put, mind-blowing.
Battery life is still awesome. To put it in context, I woke up on Sunday at 9 and put on my watch, fully charged. I wore it all day, including an hour-and-a-half walk in the afternoon listening to music. Besides a twenty-minute charge while I showered after that walk, I wore the watch all day until eleven at night when I brought my wife home from the airport. It was still at 25% battery sitting in baggage claim waiting on her delayed flight to arrive, listening to music on my AirPods.
AirPods are Apple Watch’s best buds. Don’t get an Apple Watch without AirPods. They’re so complementary they should probably ship in the same box.
Feeds-n-speeds. The leap in performance is staggering, compared to the original watch. Inactive apps wake up and are usable instantly. Cold-launched apps are ready in a second or less. Within a given app, screens of content push and pop fluidly in response to taps. It’s possible to jump from adjusting something in Music to responding to a text in Messages and back again without missing a beat. This is especially handy when grocery shopping, when I’m toggling between AnyList and Messages.
Siri’s new face. I’ve been using the new Siri watch face and have been pleasantly surprised by it. It’s the only face that offers a dedicated Siri button (complication) for launching into a Siri request. In combination with the marked speed improvements, it’s the fastest way to open apps, especially if they’re not in the list of recently-used apps in the dock.
Sometimes, when leaving the house, it takes up to a minute before the watch recognizes that neither an iPhone tether nor a WiFi connection is going to become available and a direct cellular connection should be used instead.
I was a little deflated to learn that Apple Music streaming on Apple Watch isn’t coming until sometime in October. Until then you still have to rely on manually syncing playlists to the watch, which only transfer while the watch is connected to a charging cable.
I am tortured by the lack of podcast streaming. Taking a long walk listening to a good podcast seems like such a natural fit, but there’s no first-party app for this. Third-party developers are anxious to fill the gap, but the public APIs just aren’t there yet.
A bunch of third-party apps are going to need major overhauls to be usable without a phone. No longer can they rely on an iPhone companion app to provide cold-launch access to user data and credentials, or to make network requests.
So far, the only app I use regularly that works on cellular is the superb AnyList shared grocery list app.1 The apps I miss most: Slack, Twitter, and Tweetbot.
I was mistaken about AnyList. Here’s a quote from Jason Marr, one of the developers of AnyList: “The AnyList app for Apple Watch syncs directly with the AnyList iPhone app and stores data locally on the watch so it can be used to view and modify lists even when the watch is not connected to the phone. However, the watch app does not currently support syncing over LTE, so modifications to a list that occur while the watch is disconnected from the phone will not sync to / from the watch until the next time the watch and phone are connected.” ↩︎
When facing an anxiety-provoking deadline for a software project, you have more time to plan your architecture than it may seem. Indeed, you should consider near and medium term requirements and risks to the full extent that it is possible to consider them given current knowledge, even if you choose not to address any of them up front. Take only calculated risks. Factor those risks carefully into your initial implementation. Do not touch a keyboard until you have done so. Cut corners, but cut them thoughtfully.
Urgency Versus Anxiety
It’s worth noting the important difference between a sense of urgency and anxiety. Before I got into software development I was a registered nurse in an ICU. One evening a patient went into cardiac arrest. In an instant, the room filled with nurses and other folks eager to jump in and help. I was leaning over the patient’s bed giving chest compressions to keep the patient’s blood flowing. I felt myself swarmed by a small crowd in scrubs and Crocs. There were more people present than necessary, and it made the atmosphere in the room ratchet up from an appropriate urgency to a palpable anxiety. A supervising physician on the scene wisely ordered everyone not currently providing care to leave the room. As the excess folks filed out, I overheard the physician mention something to a colleague about the dangerous anxiety he was correcting:
I’ll never forget something an instructor told me in med school about situations like this, “You always have more time than you think you do.”
He wasn’t addressing me directly, but the lesson stuck: there will never be a medical situation so dire that you literally cannot spare a moment to consider an appropriate course of action. There’s no use for anxiety in the mind of a professional doing his or her duty in a crisis. March all the unnecessary anxious thoughts out of your mind and make room for a deliberate response. Give yourself permission to think. In the years since that day, I’ve found this lesson to be very valuable, even outside of healthcare. Strange as it may seem, I hear echoes of it in my process for sketching out architectural roadmaps for the applications I work on.
In an ideal world, agile processes are adhered to with perpetual regularity, pulsing in a cadence of small, iterative changes. In the real world, an organization that can unwaveringly adhere to an agile process is hard to come by. Customer demands, public events, and other factors create constraints that require setting a fixed ship date for a product launch. This is lethal to an agile process because there’s no margin of error for iteration. You don’t have the luxury of repeated revisions. You barely have time to ship your first draft. Under these conditions, the anxiety of the engineers on such a project skyrockets. Facing a tall list of requirements and a fast-approaching, narrow delivery window, there is a temptation to bust out the keyboards and hammer out some code because how will we ever finish unless we can show immediate and significant progress oh god oh god. Invariably, code written in thoughtless haste is unmaintainable or, worse, unshippable. Technical debt is accumulated at an unacceptable rate. Inappropriate patterns are chosen and implemented haphazardly.
Breaking it Down
It is difficult to break down a set of large problems into atomic problem units that can be distributed among a team of developers and solved in parallel. In a healthy agile process, there is no single delivery date, but an ongoing process of experimentation and refinement. Impedance mismatches between the output of developers working on separate components are addressed through repeated course corrections. You fully roll out a feature only when it’s ready. But when there’s an aggressive and fixed delivery date, there’s no room in the process for such refinements. Each component has to be shippable in its first iteration, and it has to immediately lock into place alongside all the other components.
Under the pressure of a looming deadline, developers may spend an inadequate amount of time considering their architectural roadmap. At worst, this leads to a code base that fails to satisfy the launch-day product requirements on time. At best, the code produced is ill-suited for the life of the product immediately after launch. There’s no agile process in place to carry it through future milestones, so the cycle of fixed delivery deadlines and frantic architectural changes repeats until the product fails.
Here’s a metaphor for the problem. Consider an illustrator tasked with drawing a human figure. A trained illustrator works like this:
She begins with gesture lines and primitive shapes, blocking out the pose, proportions, and perspective. Progressive levels of detail are added, guided by those initial lines and shapes, until the drawing arrives at its intended appearance. Inexperienced artists try to begin at the end, drawing body contours without the aid of any primitive elements, or they hastily jot down the gesture lines and shapes without regard for proportion and perspective. Either way the result is unsatisfactory.
Carrying the metaphor, what I have seen anxious developers do is start with the far right drawing without any gesture lines. They task team members with drawing each limb separately and at a premature level of detail. When at last the team attempts to pin the components together the perspectives don’t match, the proportions are childish, and the result is hideously unusable. The irony is that — just as a rough pass of detail over an expertly-arranged set of gesture lines can yield a pleasantly unfinished portrait — a simple overlay of features and polish atop an expertly-ordered primitive architecture is the very definition of a minimally-viable product.
There’s another software development pitfall suggested by this metaphor. Accurate and pleasant gesture lines are extraordinarily difficult to master. They may look like stick figures to an untrained eye, but they’re anything but. Countless hours of practice and studious observation are required to become proficient at drawing these primitive shapes. If you undertake them without care, the resulting drawing will have all the same flaws as a drawing made without any gesture lines. In the same way, an architectural roadmap must be considered with extreme care. Don’t just list everything you know, list everything you don’t or can’t know. You don’t have to plan every detail, but you must wrestle with the problem area long enough to be reasonably confident that your architecture will be both efficient in the short term and stable for the medium term. If you’re lucky it will be stable for the long term. No matter what you choose, it’ll always be a guess. But make it a well-educated guess.
A Concrete Example
Here’s a concrete example of the kind of discussion I think can be spared some time at the beginning of a project without making commitments that over- or under-engineer things. Consider an app backed by a web service with user-specific accounts. Questions that might come up during a planning phase:
- How likely do we think it is that the app will ever need to support more than one account at a time?
- If we choose not to leave space for multiple accounts in our architecture, how disruptive would it be if multiple accounts suddenly became a requirement?
- How much additional up-front effort would it take to leave space for multiple accounts in our architecture though we would only ship with user-facing support for a single account?
- How likely is it that we’ll have to support iOS State Restoration, and would this be impacted by our chosen account plan?
- What else haven’t we considered, and is any of it risky enough to require addressing now?
And the key points during that discussion might be:
- We have no idea how likely it is we’ll need to support multiple accounts. All we know is it’s not currently required.
- If we think we’ll never have to support multiple accounts, one option is to provide global access to a singleton instance of an account.
- If we suddenly have to support multiple accounts and we’re using a singleton instance fixed to one account, that requirement change would be very painful to support.
- Passing an isolated account via dependency injection instead of providing a globally-accessible singleton instance would be comparatively easier to migrate to a multiple-account setup.
- Passing an isolated account via dependency injection would have a trivial impact on overall level of effort in a single-account application.
- Dependency injection could conceivably make supporting iOS State Restoration harder as that API is based on isolated view controllers re-instantiating themselves via NSCoding. Passing references to specific account instances during or after state restoration is considerably more complex than if restored view controllers had immediate access to a global instance during decoding.
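To make the singleton-versus-injection tradeoff from those bullet points concrete, here is a minimal sketch. The names (`Account`, `CurrentAccount`, `ProfileViewModel`) are hypothetical, invented purely for illustration:

```swift
import Foundation

// A hypothetical account type for the sake of the example.
struct Account {
    let username: String
}

// Option A: a globally-accessible singleton. Cheap to write today, but
// every call site silently assumes there is exactly one account, which
// makes a later migration to multiple accounts painful.
enum CurrentAccount {
    static var shared = Account(username: "jane")
}

// Option B: dependency injection. Each component receives the account it
// operates on, so supporting a second account later mostly means changing
// what gets passed in, not rewriting every access point.
final class ProfileViewModel {
    let account: Account
    init(account: Account) {
        self.account = account
    }
    var title: String { "Profile: \(account.username)" }
}
```

Note that the injected version costs almost nothing extra in a single-account app, which is precisely the point made above about trivial up-front effort.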
Please note I’m not arguing for one way or the other here. I’m merely sketching out some terrain that such a discussion might traverse.
In the end there’s always risk. You make the best choice you can given the information you have. I recommend discussing at length both the near and medium term before committing to a near-term plan. All too often, these discussions either don’t happen or they happen in a rush and so risks aren’t considered to the full extent that it is possible to consider them given current knowledge.
That last line is the bad habit that rubs me wrong:
the risks aren’t considered to the full extent that it is possible to consider them given current knowledge.
This is the point of the quote from that ICU physician I admired so much. You always have more time than it seems like you do. You always have time to consider the impact of what you know and what you don’t, even if you choose not to address any of the risks up front, even if the outcome of that consideration means cutting huge corners. At least the risks you’re taking are calculated.
I could write a book about The Leftovers, or somebody could, maybe not me. I can’t recall a show that so deliberately avoided answering its own questions and still managed not to blot out any of the emotion or struggle of its characters. There’s so much to praise, to expound upon, but that’s not something I can contribute. I will add this note, however: the Sudden Departure is a metaphor for our world, for the cosmos: undeniably miraculous, unspeakably violent, raising endless questions which it answers only with infinite silence.
Tune in Next Time for “Last-Minute WWDC Comments” or “Apple Isn’t Doomed to Fail, But Their Future Doesn’t Look as Rosy as Their Past”
I’ve been thinking a lot about Apple’s biggest success stories. The products that mattered all rode currents outside of Apple’s control:
iMac: Internet is ready to spread into every home, but getting a computer and getting it set up (virus free, connected to your printer and your new digital devices like cameras) is too onerous. Apple comes along with a cute box that’s plug and play. Everyone got what they wanted: ISPs, device manufacturers, customers, and Apple.
iPod: Music industry worried about piracy, customers want digital music and are willing to pirate to get it. Apple comes along with a DRM-protected store and player that makes it more convenient to buy than to pirate. Everyone got what they wanted: publishers, customers, and Apple.
iPhone: carriers are finally ready(ish) for mobile internet, but the phones suck. Apple comes along with the right hardware and OS and UI. They give the carriers a reason to charge all their customers more money, and customers a reason to feel comfortable becoming hardcore users of a new kind of personal computer, to say nothing of the life-changing benefit of carrying the internet in your pocket. Everyone got what they wanted: carriers, customers, and Apple. Not to mention all the industries ubiquitous smartphones made possible.
Considering Apple’s more recent projects against the market currents we see today, the picture is gloomy:
TV: Customers want content, and they don’t care whether it’s from the Netflix app on their phone or the one bundled in their Samsung TV. Apps on Apple TV don’t make that content meaningfully better, and no industry partners rely on Apple to deliver those apps.
Watch: Customers are probably over-served by current smartphones. Nothing has changed about daily life that makes wearing a watch more important than it’s been in the past. No industry partners are relying on Apple to deliver a watch. This is a niche market.
iPad: Outside of certain niche jobs, iPad doesn’t provide enough productivity gains to be worth the tradeoff in overall simplicity. The decline of paid productivity software means would-be industry partners that might otherwise rely on Apple to deliver the iPad (and which would make the iPad a compelling device for customers) are drying up or moving to SaaS models that are platform agnostic.
So what does that leave?
AR/VR: Outside of niches like gaming and enterprise needs, are there any sea changes in this space that we can’t yet foresee? This is the area where I see Apple being most able to make a new contribution.
Cars: Customers are going to have their lives changed by fleets of safe, convenient self-driving vehicles. All the industry partners that will spring up around those networks are going to rely on whoever delivers those fleet services to exist. This will be a huge space. Too bad Apple seems to have fumbled the ball, if the rumors are true.
AI: If this space ever achieves the promises suggested in science fiction, customers will love the convenience offered by intelligent assistants. But Apple is simply not structured to get there more quickly or more effectively than their competition, at least in terms of software with “soft” interfaces (like voice).
As an Apple fan, this is pretty depressing. My expectations for today’s announcements are very low.
My ideal politician would believe something like this:
Any time a bill crosses my desk, I ask myself two questions. Will this help a smaller business compete against a bigger one? Will this help a family recover from that competition? If the answer to both those questions is no — I won’t vote for it.
The reality is we have one party that routinely ignores the first question, and another party that gets its rocks off ignoring both.
Some designs I receive from clients call for specific weight and family combinations for user interface fonts. Use a display font here, and a text font there. This is true of both custom fonts and system-provided fonts. Apple seems to encourage this level of finesse with system fonts in their online design resources. There are several distinct families in the downloadable San Francisco fonts available on the Apple developer site.
When requesting a system font in code, the UIFont returned will either be a display or a text font, depending on the API you use. For the systemFont(ofSize:weight:) API, the cutoff is in the neighborhood of 22 points, above which you’ll get a display font and below which a text font. Other APIs like preferredFont(forTextStyle:) might return a display or a text font regardless of point size. This is great for general purposes. An inexperienced designer can use whichever API is closest to their needs and receive a font whose family is a best fit for that size, weight, and the characteristics of the current display.
However there are times when an experienced designer needs to specify not just a weight and a point size but also a family. In those situations, it is difficult for the developer to satisfy the design requirements. If you’re using a custom font bundled with your application, then you can create a font descriptor quite easily:
let descriptor = UIFontDescriptor(fontAttributes: [ UIFontDescriptorNameAttribute: MY_FONT_NAME ])
If you need to do this with the San Francisco fonts, it is much more difficult. There are only two ways I know of to do this.
You can make some educated guesses about the behavior of the UIFont APIs and do something like this:
let textFont = UIFont.systemFont(ofSize: 6) // .SFUIText 6pt, used for font name
let titleFont = UIFont.preferredFont(forTextStyle: .title1) // .SFUIDisplay-Light 28pt, used for point size
let descriptor = UIFontDescriptor(fontAttributes: [
    UIFontDescriptorNameAttribute: textFont.fontName
])
let desiredFont = UIFont(descriptor: descriptor, size: titleFont.pointSize) // .SFUIText 28pt, combo of desired traits
Very Yucky Hack
A more reliable but even yuckier solution: you can bundle the downloadable fonts from the developer site, and reference them by name in a font descriptor:
let descriptor = UIFontDescriptor(fontAttributes: [
    UIFontDescriptorFamilyAttribute: "SF UI Text",
    UIFontDescriptorFaceAttribute: "Semibold"
])
This bloats the size of your app bundle but at least you can guarantee that you’re always using a display or text font when you need one or the other.
Why Does This Matter?
To illustrate why this matters, please consider the following example. While the anecdote below is partially contrived, it’s representative of the kind of design problems I’ve had to solve on actual projects. I think it serves as a useful illustration of the shortcomings of the current UIFont APIs.
Consider a tvOS app with a horizontal collection view of cards. As each card takes the center spot, it receives focus and enlarges to about twice its unfocused size. Each card contains some body text.
Let’s assume for the sake of this example that a variation of SF UI Display will be vended from whichever system font API is used to request a 44-point regular-weight font.
Which font family would actually be preferable for the text on these cards? The answer is not straightforward. In the enlarged state, a display font might make sense. At a size of 44 points in the default expanded card state, a text font might look awkward in comparison. A 44 point size is probably large enough to accommodate the characteristics of a display font:
But on the other hand, for every one focused card there are always four unfocused cards (two on either side) visible at all times. If the unfocused card contents are scaled down via a scale transform, then whatever font is used at the focused size (with an identity transform) will be scaled down to a perceived “22 point” size. Note that the text is still laid out using a 44 point display font, but the unfocused card transform results in a perceived size that is much smaller. At this scaled down size, a display font would be harder to read, for example in words like “clipper” or “sail” where lowercase L’s are adjacent to lowercase I’s — especially so when staring across the room at a grainy television.
So what alternatives are there?
One alternative would be to request two fonts: one to lay out the text in the focused state, and another for the unfocused state. The problem is that if this results in a display font at the 44-point size and a text font at the 22-point size, the line fragments will most likely break at different words between the two states, creating confusion for the user during focus update animations:
The more desirable alternative for this design is to always prefer a text font, since this preserves the maximum amount of legibility at all focus states:
We would use a scale transform to downscale the 44-point identity-transformed text to a perceived 22-point size, preserving legibility in both the focused/enlarged state and the unfocused/scaled-down state, without disturbing the line breaks during the transition from one to the other.
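The perceived-size arithmetic behind this decision is simple enough to sketch. The helper names below are my own, and the 22-point cutoff is the approximate threshold mentioned earlier, not a documented constant:

```swift
import Foundation

// Layout size never changes; only the transform scale does, so the
// size the viewer perceives is simply their product.
func perceivedPointSize(layoutSize: CGFloat, scale: CGFloat) -> CGFloat {
    return layoutSize * scale
}

// Given the roughly-22-point boundary between display and text families,
// prefer a text font whenever any focus state is perceived at or below it.
func shouldPreferTextFont(layoutSize: CGFloat,
                          scales: [CGFloat],
                          cutoff: CGFloat = 22) -> Bool {
    return scales.contains { perceivedPointSize(layoutSize: layoutSize, scale: $0) <= cutoff }
}
```

For the card design above, a 44-point layout shown at scales 1.0 and 0.5 yields perceived sizes of 44 and 22 points, so the unfocused state tips the decision toward a text font for both states.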
Wishlist For Apple
If anyone from Apple is reading, I’d love it if y’all could add programmatic access to family-based font selection for system fonts:
let bigTextFont = UIFont.systemFont(
    ofSize: 44,
    weight: UIFontWeightRegular,
    family: UIFontFamilyText
)