How I Organize a Swift File
As a professional developer, it’s my job to work with the code that I’m given, even if it’s not ideal or aligned with my own coding style. That doesn’t mean I can’t have my preferences and peeves. Sometimes I inherit Swift code that looks like this:
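Something like this condensed sketch (names invented for illustration, with UserService standing in for any dependency):

import UIKit

class ProfileViewController: UIViewController {
    var userService: UserService!
    // MARK: Outlets
    @IBOutlet weak var avatarView: UIImageView!
    @IBOutlet weak var nameLabel: UILabel!
    //loads the user and updates the views
    func loadUser() {
        userService.fetchUser { [weak self] user in
            self?.nameLabel.text = user.name
        }
    }
    override func viewDidLoad() {
        super.viewDidLoad()
        loadUser()
    }


    @IBAction func didTapFollow(_ sender: UIButton) {
        userService.follow()
    }
    /// Reloads everything.
    func reload() {
        loadUser()
    }
}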
The main thing this code has going for it is that it’s terse, which can be a good thing for some Swift code. But there are problems:
- MARK: comments aren’t uniformly applied, which makes it hard to tell at a glance when one section ends and the next begins.
- Alternating use of single and double line breaks suggests incorrect impressions of logical groupings.
- There’s no overarching system to how the methods are grouped and ordered. Some are superclass overrides, others are custom methods, others are IBActions, etc. In order to know if a method is implemented, you have to read or search the entire file.
- Essential dependencies are exposed as read/write properties, even though they should only be set once. Most likely the only reason these properties are exposed as vars is because the view controller is initialized via a storyboard, which doesn’t permit custom init methods.
- Members aren’t given explicit access levels, so it isn’t clear to the reader which methods and properties are meant to be used by other members in the module, and which ones are just lazily defaulting to internal.
- Documentation-level comments use a mix of two- and three-slash formatting, and are placed at inconsistent locations.
When I encounter code like that, I try to clean it up. Here’s what’s different:
- The code is separated into sections by member type and access level. Properties are all above the init section; methods are below it. Both the property and method sections are further divided (roughly) by access level: Public/Internal, Overrides, Interface Builder, and Private. MARK: headers are added at the top of every code section.
- No more than one empty line is used between any two sections. No blank lines are placed between property declarations (except for those that have documentation).
- Everything that can be made private has been made private. This includes dependencies. Dependencies are passed in as arguments to a new static factory method which initializes and correctly configures the view controller from a storyboard. Interface Builder outlets and actions have also been marked private, since those should not be accessible outside of this class. Though this sacrifices the ability to use segues and storyboard references, the clarity and reliability gained via explicit “injected” dependencies far outweigh those losses.
- Documentation uses the style seen throughout the Swift Standard Library (three slashes, wrapped at 80-character line lengths).
Here’s roughly how it looks with some of the details above removed, so that all of the code sections can be captured in one condensed sketch (again with invented names):
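import UIKit

class ProfileViewController: UIViewController {

    // MARK: Interface Builder Properties

    @IBOutlet private weak var avatarView: UIImageView!
    @IBOutlet private weak var nameLabel: UILabel!

    // MARK: Private Properties

    private var userService: UserService!

    // MARK: Factory

    /// Initializes and configures the view controller from its storyboard,
    /// with all dependencies injected up front.
    static func make(userService: UserService) -> ProfileViewController {
        let storyboard = UIStoryboard(name: "Profile", bundle: nil)
        let vc = storyboard.instantiateInitialViewController() as! ProfileViewController
        vc.userService = userService
        return vc
    }

    // MARK: Overrides

    override func viewDidLoad() {
        super.viewDidLoad()
        loadUser()
    }

    // MARK: Interface Builder Actions

    @IBAction private func didTapFollow(_ sender: UIButton) {
        userService.follow()
    }

    // MARK: Private Methods

    /// Fetches the current user and updates the visible views.
    private func loadUser() {
        userService.fetchUser { [weak self] user in
            self?.nameLabel.text = user.name
        }
    }
}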
I don’t expect everyone to agree with my preferences. This is just what I like. But I think I can make pretty good objective arguments for the principles I’m trying to put into practice:
- Nothing is exposed to the module (or anything else for that matter) that isn’t expressly designed to be freely used at that access level.
- All external dependencies are explicitly required at (or near) init time, heavily discouraging (if not outright preventing) misuse.
- A consistent, logical organization is used when breaking up code sections, so it takes less effort to find a given method or property when you need to review it.
- Broader access levels are moved near the top so that the exposed API surface is easier to see without having to jump to a generated interface.
- Documentation uses platform-consistent formatting so it’s easier to distinguish from an implementation-detail comment.
Self-Driving Car Fleet Commercial: 2021
INT. BEDROOM - DAY
A teenage GIRL is sitting on her bed working on homework. Her head is bobbing to music bumping in her headphones.
A smartphone beeps from the bedside table. The girl picks it up. Chat bubbles appear in the air around her as she uses her phone.
MESSAGE FROM FRIEND
beach trip tonite?
GIRL (MESSAGING)
lets do it
CLOSEUP: PHONE SCREEN
The girl launches a brightly colored app on her phone. She presses a button that says “Day Trip”.
INT. BEDROOM - DAY
The girl swipes through her options on her phone. As she swipes, her dorm room dramatically swipes away, replaced by each destination, surrounding the girl like she’s being magically transported: downtown, nature hike, theme park. Each destination appears with a badge in the corner that reads “Travel time: N hrs”. She chooses a beach that’s two hours away. It looks like sunset at the beach.
CLOSEUP: PHONE SCREEN
A carousel appears with silhouettes of vehicles. The girl flicks through the available vehicle types. Each one has a badge in the corner that reads: “Up to N passengers” until she stops on an eight-passenger, fun-looking, rounded box on wheels. She selects it.
INT. BEDROOM - DAY
The bedroom has returned to normal. All these simulated smartphone actions are happening very quickly. You’re not meant to study them, only to perceive the gist of them as they whizz past.
A title hovers in the air, as if projected from her phone: “Invite Friends”. The girl taps her phone several times. Each time she taps, an avatar bubble appears in the air next to her. It’s all the friends she’s inviting on the beach trip.
The girl taps a big “Book It” button.
EXT. DORMITORY - AFTERNOON
A boxy-looking van pulls up outside the girl’s dorm. The cool-looking side door juts open as the van rolls to a stop, electric motor idling. The girl’s FRIENDS are already inside, laughing, waving her in, bracelets jangling in the summer sun.
INT. VAN - AFTERNOON
Everyone inside the van is partying. A TV is playing a movie, or maybe a video game, or karaoke. All the seats face inward. There is no steering wheel, no bucket seats. The girl finds a spot on the wraparound bench seating and fastens her seatbelt like it’s a muscle memory.
EXT. BEACH - SUNSET
The van pulls up in the immediate foreground, perfectly centered in our field of view. The sun is setting in the distance, so the van is heavily silhouetted. It’s shaped just like the silhouette we saw earlier in the smartphone app. The crazy door opens and silhouettes of the kids pour out of the van and onto the sand. As they run down to the water, the title appears in a bright, thin, white font:
U B E R
Future Imperfect
In an episode of the third season of Black Mirror, a woman pulls up to an electric gas station of the near future. It’s the wee hours of the night. Her electric car, a rental, has a low battery. It’s about to putter out. She goes to the — I’m not sure what to call it — pump and tries to plug in the charging cable. It won’t fit. She rummages through the trunk and can’t find a conversion cable. She begs the attendant, and then everyone else at the station, but no one has the cable. She’s stranded. She has no choice but to trek down the interstate on foot, hoping to hitch a ride.
That scene strikes me as precisely the bold future that we’re rocketing towards.
Examples abound.
Only one percent of Android users have access to the latest version of Android because carriers and manufacturers aren’t motivated to release updates.
The next lightbulb you buy might be conscripted into an army of lightbulbs bent on bringing down the power grid on the eastern seaboard if its manufacturer isn’t obligated to use strong enough security measures. The few manufacturers that do use such measures might be too expensive for you, or their bulbs just won’t work with your other Things of the Internet.
Facebook for iOS still doesn’t use the native share sheet, which was released four years ago.
The new “TV” app for tvOS/iOS doesn’t include Netflix, arguably the most important streaming service, presumably because Netflix refused whatever terms Apple required.
There is no Amazon Prime Video app at all on tvOS, let alone in the new TV app. Comixology, another Amazon product, doesn’t sell comics in-app because the 30% markup on in-app purchases makes the idea a non-starter for Amazon. They can’t even link to the online store. You just have to know that there’s an online store where you can purchase the items, and that those items will magically appear in the app.
Google Maps, demonstrably the best mapping service in the world, can’t be configured as the default mapping app on your iPhone or iPad. Because of the ongoing competition between Apple and Google, it’s not even installed by default anymore1. Many iPhone owners will never discover just how much better Google Maps is.
Google’s speech-to-text recognition is fantastically good. Try it out in the Google iOS app sometime, and compare the same prompts with what you get from Siri. I’m talking about the difference between a barber jacket and a Barbour jacket. Google understands when you mean the latter. The default iOS keyboard has a voice recognition feature that isn’t nearly as good, but you can’t use Google’s speech recognition in Gboard, Google’s iOS keyboard, because Apple won’t allow them to access the microphone.
Five years and five operating systems later, Apple finally exposed a public API for Siri, but it only works with six limited domains. You still can’t use Siri to put an item in the todo list app of your choice.
If you prefer Chrome over Safari on iOS, or the Gmail iOS app over Mail, you have to plod through the tedious procedure of launching those apps manually since system-wide features can only use the native apps.
Hilariously, you can spend $4,299 on a spanking-new MacBook Pro and $969 on an iPhone 7 Plus — both from the same manufacturer — but you cannot connect them together without a $25 conversion cable.
It is no wonder, then, that Google is staking their future on original hardware. There’s no way they can embed all of their fantastic services into a competitor’s device at a level that’s integrated deeply enough to be useful. The only way they can bring their AI assistant plans into concrete reality is to make their own phone. From a historical perspective, this is a pathetic waste of resources. Apple already makes the best hardware and software. The ideal smartphone would marry Apple’s hardware and software with Google’s services. But because of the intractable realities of competition and viable business models, Google has to reinvent Apple’s wheels in order to keep selling their own.
This is not a rant about the lack of open standards. Open standards don’t lead to a perfect user experience, either. The podcasting industry is, by tech standards, the wild west. But if you try sharing an episode with someone, they’ll be unable to hear it unless you share it using a podcast client that offers a proprietary browser player. New features in HTML and CSS are dependent upon mass market browser developers like Apple, Google, and Microsoft agreeing to implement spec changes. Their willingness to do so is a business decision, and could change in the future — <cough>AMP</cough>.
Please understand I am not suggesting that things could be any different. At least not practically. There are fixed points in business, law, and history that are determining our status quo. What I am saying is that our technological future is being shaped more by business constraints than engineering constraints. What is technically possible is outstripping what is feasible in the market.
The future is looking less and less like Star Trek and more like that woman in Black Mirror, thumb in the wind, begging for a ride.
- By this I mean the original iOS Maps application used Google Maps for its data, thus in a sense being installed by default. ↩
AsyncOperations
Today I’m open sourcing some utility classes I’ve been working on for the past several months. From the GitHub description: “A toolbox of NSOperation subclasses for a variety of asynchronous programming needs.”
Just Show Me The Source
AsyncOperation is the abstract base class used throughout.
AsyncBlockOperation offers simple asynchronous execution of a block.
AsyncTaskOperation manages multiple requests, passing a shared generic result back to all callers.
AsyncTaskQueue coalesces identical task operations so expensive work is only performed once.
Asynchronous NSOperations
Generally speaking, NSOperation makes it easy to chain together dependencies among multiple operations. Consider a sequence of NSBlockOperations:
let one = BlockOperation {
    print("One")
}
let two = BlockOperation {
    print("Two")
}
two.addDependency(one)
// Prints:
// One
// Two
But what happens if you have a block that must be executed asynchronously?
let one = BlockOperation {
    doSomethingSlowly(completion: {
        print("One")
    })
}
let two = BlockOperation {
    print("Two")
}
two.addDependency(one)
// Prints:
// Two
// One
There are at least two problems here. Of course our output is now printing in the wrong order, but notice also that there’s no way to cancel one after it has called doSomethingSlowly(). As far as NSOperationQueue is concerned, that operation has already finished, despite the fact that we haven’t yet received our result.
To solve both of these problems, we would need to change the behavior of NSBlockOperation so that it isn’t marked finished until we say so. Since we can’t change the behavior of that class, we’d have to write our own NSOperation subclass with that capability:
let one = MyAsynchronousOperation { finish in
    doSomethingSlowly(completion: {
        print("One")
        finish()
    })
}
let two = BlockOperation {
    print("Two")
}
two.addDependency(one)
// Prints:
// One
// Two
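For the record, here’s a minimal sketch of the state-keeping boilerplate such a subclass entails (simplified and not thread-safe; the block-taking initializer from the example above is elided):

import Foundation

class MyAsynchronousOperation: Operation {

    private var _executing = false
    private var _finished = false

    override var isAsynchronous: Bool { return true }
    override var isExecuting: Bool { return _executing }
    override var isFinished: Bool { return _finished }

    override func start() {
        guard !isCancelled else {
            markFinished()
            return
        }
        willChangeValue(forKey: "isExecuting")
        _executing = true
        didChangeValue(forKey: "isExecuting")
        // Kick off the asynchronous work; the operation is only marked
        // finished when the work calls back.
        execute { [weak self] in
            self?.markFinished()
        }
    }

    // Subclasses (or a stored block) perform their async work here and
    // call `finish` when done.
    func execute(finish: @escaping () -> Void) {
        finish()
    }

    private func markFinished() {
        willChangeValue(forKey: "isExecuting")
        willChangeValue(forKey: "isFinished")
        _executing = false
        _finished = true
        didChangeValue(forKey: "isExecuting")
        didChangeValue(forKey: "isFinished")
    }
}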
Writing NSOperation subclasses is something every Swift developer should know how to do, but it’s still a pain in the a**. It would be preferable to have an abstract base class that subclasses NSOperation, adding built-in support for asynchronous execution in a way that can be extended for any arbitrary purpose. That’s what AsyncOperations aims to provide.
AsyncOperations
There are four classes in AsyncOperations:
AsyncOperation: An abstract base class that subclasses NSOperation. This class handles all the annoying boilerplate of an NSOperation subclass (including the KVO notifications around execution and cancellation). This class is not meant to be used directly, but via concrete subclasses. You can write your own subclasses, but there are two subclasses provided for you which cover common use cases.
AsyncBlockOperation: Similar to NSBlockOperation, except it only accepts a single execution block. The operation will not be marked finished until the execution block invokes its lone finish handler argument. (A usage sketch follows these descriptions.)
AsyncTaskOperation: This generic class provides support for associating multiple requests for a given result with a single operation. The shared result of the operation (of the generic Result type) will be distributed among all the operation’s active requests. You can use AsyncTaskOperation directly in your own NSOperationQueues, or you can use it implicitly via AsyncTaskQueue.

AsyncTaskQueue: This generic class acts as a convenient wrapper around an NSOperationQueue of AsyncTaskOperations. It coalesces requested tasks with matching identifiers into a single task operation, so that expensive work is only performed once, even if it is requested concurrently from isolated callers. A classic use case for this class would be in the implementation details of an offline image cache.
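Based on the description above, using AsyncBlockOperation might look something like this (the initializer shape is inferred from the finish-handler behavior described; check the repository for the exact signature):

let queue = OperationQueue()
let op = AsyncBlockOperation { finish in
    doSomethingSlowly(completion: {
        // The operation stays in its executing state
        // until the finish handler is invoked.
        finish()
    })
}
queue.addOperation(op)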
Examples
ImageCache.swift. A simplified version of an image cache that uses a private AsyncTaskQueue to coalesce concurrent requests for the same image into a single task operation, passing the resulting image back to all callers.
HeadRequestOperation.swift. A contrived example of a concrete AsyncOperation subclass, illustrating how a subclass must implement the required execute(finish:) method. This class makes a HEAD request for an arbitrary URL, returning the result via a completion block.

Blocks.swift. A simple example of how you would chain together AsyncBlockOperations using the standard NSOperation dependency API.
Implementing AVAssetResourceLoaderDelegate: a How-To Guide
TL;DR
See the code samples for all this on GitHub.
Meat & Potatoes
I’m writing a podcast app — I’m calling it ’sodes — both as a way to let off steam and so that I can have the fussy-casual podcast app I’ve always wanted. Most podcast apps pre-download a full queue of episodes before you listen to them, and offer settings to manage how many episodes are downloaded, how often, and what to do with them when finished. ’Sodes will be streaming-only. I think managing downloads is an annoying vestigial trait from when iPods synced via iTunes. I only listen to a handful of podcasts, and never from a place that doesn’t have Internet access. I’d rather never futz with toggles and checkmarks or police disk usage.
Most other apps do have optional streaming-only modes which, as far as I know1, are implemented as follows:
When a typical app streams an episode, the audio data is streamed using AVFoundation or some similar framework. Except for services like Stitcher, the original audio file is streamed in full quality. This streaming may only buffer a portion of the episode, depending on where you start, how far you skip, etc. Listen for long enough and it will eventually buffer the entire episode, byte-range by byte-range.
In parallel to the audio stream, a typical app also downloads the episode file in full and caches it locally. This is so the app can resume your place in the episode more quickly in a future session, or during the current session if your internet connection craps out.
In other words, even though you may be using a streaming-only mode, your app might be downloading the episode twice. It’s a little sneaky, but it’s a perfectly sensible compromise. If the parallel download succeeds it means the current episode won’t need to be re-buffered during a future session. AVFoundation does not persist streaming buffers across app sessions. Since it’s not uncommon for a podcast MP3 to be encoded at ~60 megabytes an hour, resuming playback from a cached file can dramatically reduce data usage over time, especially if it takes several sessions for someone to finish listening to an episode.
I could use that same dual-download pattern with ‘sodes, but I wondered if it would be possible to eliminate the need for a parallel download without also having to re-download the same streaming buffer with every new app session. After some digging, I found an obscure corner of AVFoundation which will allow me to do exactly that. There’s a protocol called:
AVAssetResourceLoaderDelegate
It lets your code take the reins for individual buffer requests when streaming audio or video with an AVPlayer. When setting up an AVURLAsset to stream, you can set the asset’s resource loader’s delegate to a conforming class of your own:
let redirectUrl = {I’ll get into this below…}
let asset = AVURLAsset(url: redirectUrl)
asset.resourceLoader.setDelegate(self, queue: loaderQueue)
Your custom resource loader delegate is given an opportunity to handle each individual request for a range of bytes from the streamed asset, which means you could load that data from anywhere: from the network if you don’t already have the bytes, or by reading it from a local file if you do.
An implementation of AVAssetResourceLoaderDelegate is hard to get right. The actual code you write needn’t be extraordinary. What’s hard is that the documentation is spotty, the protocol method names are misleading, the required url manipulation is bizarre, and the order of events at run-time isn’t obvious. There are still aspects of it that I don’t fully understand, but what follows is a record of what I’ve learned so far.
Note: there are portions of AVAssetResourceLoaderDelegate that are only applicable to streamed media that require expiring forms of authentication. Those are outside the scope of this post since I don’t need to use them for streaming a podcast episode.
Basics of a Streaming Session
When you add an AVPlayerItem to an AVPlayer, the player prepares its playback pipeline. If that item’s asset points to a remotely-hosted media file, the player will want to acquire a sufficient buffer of a portion of that file so that playback can continue without stalling. The internal structure of the relationship between AVPlayer, AVPlayerItem, and AVURLAsset is not publicly exposed. But it is clear that AVPlayer fills its buffer with the help of AVURLAsset’s resourceLoader property, an instance of AVAssetResourceLoader. The resource loader is provided by AVFoundation and cannot be changed. The resource loader fulfills the AVPlayer’s requests both for content information about the media and for specific byte-ranges of the media data.
AVAssetResourceLoaderDelegate
AVAssetResourceLoader has an optional delegate property that must conform to AVAssetResourceLoaderDelegate. If your app provides a delegate for the resource loader, the loader will give its delegate an opportunity to handle all content info requests and data requests for its asset. If the delegate reports back that it can handle a given request, the resource loader relinquishes control of that request and waits for the delegate to signal that the request finished.
For our purposes, there are two delegate methods we need to implement:
func resourceLoader(_ resourceLoader: AVAssetResourceLoader, shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool

func resourceLoader(_ resourceLoader: AVAssetResourceLoader, didCancel loadingRequest: AVAssetResourceLoadingRequest)
The first method should return true if the receiver can handle the loading request. The method name is confusing at first glance since it’s written from the perspective of the resource loader (“should wait”) as opposed to the delegate (“can handle”), but it makes enough sense. The delegate is returning true if the resource loader should wait for the delegate to signal that the request has been completed. It is from inside this method that the delegate will kick off the asynchronous work needed to satisfy the request.
The second method is called whenever a loading request is cancelled. This is easy enough to reproduce. If you start playback from the beginning of a file, and then scrub far ahead into the timeline, there’s no longer a need to fill up the earlier buffer so the request for that initial range of data will be cancelled in order to spawn a new request starting from the scrubbed-to point.
Both delegate methods will be called on the dispatch queue you provide when setting the resource loader’s delegate:
asset.resourceLoader.setDelegate(self, queue: loaderQueue)
I recommend that you use something other than the main queue so that loading request work never competes with the UI thread. I also recommend using a serial queue so that you don’t have to juggle concurrent procedures within your delegate.
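For example (the queue label is illustrative):

// DispatchQueue is serial by default, which keeps delegate callbacks
// from interleaving with each other or with the main thread.
let loaderQueue = DispatchQueue(label: "com.example.resource-loader")
asset.resourceLoader.setDelegate(self, queue: loaderQueue)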
AVAssetResourceLoadingRequest
The AVAssetResourceLoadingRequest class represents either a request for content information about the asset or a request for a specific range of bytes in the asset’s remotely-hosted file. You can determine which kind of request it is by inspecting the following two properties:
var contentInformationRequest: AVAssetResourceLoadingContentInformationRequest?
var dataRequest: AVAssetResourceLoadingDataRequest?
If there is a non-nil content information request, then the loading request is a content info request. If there is a non-nil data request and if the content info request is nil, then the loading request is a data request. It’s crucial to note here that content info requests are always accompanied by a data request for the first two bytes of the file. The actual received bytes are not used by the resource loader.
My implementation of resourceLoader(shouldWaitForLoadingOfRequestedResource:) looks like this:
if let _ = loadingRequest.contentInformationRequest {
    return handleContentInfoRequest(for: loadingRequest)
} else if let _ = loadingRequest.dataRequest {
    return handleDataRequest(for: loadingRequest)
} else {
    return false
}
I perform the work specific to either kind of request in those two private convenience methods.
Content Info Requests
Handling a content info request is straightforward. Create a URLRequest for the original url using a GET verb and set the value of the byte range header to the loading request’s dataRequest’s byte range:
let lower = dataRequest.requestedOffset
let upper = lower + Int64(dataRequest.requestedLength) - 1
let rangeHeader = "bytes=\(lower)-\(upper)"
request.setValue(rangeHeader, forHTTPHeaderField: "Range")
You may wonder why I’m not using a HEAD request instead. I’m following Apple’s lead. Their engineers have their well-considered reasons. My educated guess is that if you request a byte range, the response header field Content-Range will contain a value for the expected content length of the entire file. This value wouldn’t be present in a HEAD response header. Requesting only two bytes keeps the accompanying data transfer negligible.
Hang onto a strong reference to the loading request and the loading request’s contentInformationRequest. After receiving a response back from the server, you must update the content info request’s properties:
let infoRequest = loadingRequest.contentInformationRequest! // non-nil for a content info request
infoRequest.contentType = {the content type, e.g. “public.mp3” for an MP3 file}
infoRequest.contentLength = {the expected length of the whole file from the Content-Range header, if present}
infoRequest.isByteRangeAccessSupported = {whether the server supports byte range access}
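The expected length comes from the Content-Range response header, which looks like “bytes 0-1/4593113”; the figure after the slash is the length of the whole file. Pulling it out might look like this (a minimal sketch; real code should validate the header more defensively):

func expectedContentLength(fromContentRangeHeader header: String) -> Int64? {
    // e.g. "bytes 0-1/4593113" -> 4593113
    let parts = header.components(separatedBy: "/")
    guard parts.count == 2 else { return nil }
    return Int64(parts[1])
}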
Warning: do not pass the two requested bytes of data to the loading request’s dataRequest. This will lead to an undocumented bug where no further loading requests will be made, stalling playback indefinitely.
After updating those three values on the content info request, mark the associated loading request as finished:
loadingRequest.finishLoading()
If you get an error when trying to fetch the content info, mark the loading request as finished with an error:
loadingRequest.finishLoading(with: error)
While your delegate is handling the content info request, it is unlikely that any other requests will be started. Your request could be cancelled during this time if the player happens to cancel playback. Since you’re holding onto a strong reference to the loading request, you should take care to cancel any URLSessionTasks and relinquish references to the loading request when it’s cancelled as well as when it’s finished.
Assuming you fetched the content info successfully, calling finishLoading() will trigger the resource loader to follow up with the first genuine data request.
Data Requests
For a given asset, the resource loader will only make one content info request but will make one or more data requests (instances of AVAssetResourceLoadingDataRequest). If the host server does not support byte range requests, there will be one data request for the full file:
if let dataRequest = loadingRequest.dataRequest,
    dataRequest.requestsAllDataToEndOfResource {
    // It's requesting the entire file, assuming
    // that dataRequest.requestedOffset is 0.
}
The iTunes podcast directory will reject any podcast feed whose host server doesn’t support byte range requests, so in practice it’s probably hard to find a podcast host server that doesn’t support them. It’s not a terrible idea for a podcast-specific implementation of AVAssetResourceLoaderDelegate to simply fail whenever you determine that the host server doesn’t support byte range requests. This will spare you the additional headache of handling the edge cases where either the full file is being requested or the length of the file exceeds the maximum length that can be expressed in an NSInteger on the current architecture (this can happen on 32-bit systems). See the documentation for AVAssetResourceLoadingDataRequest for more information about these edge cases.
Most of the time your data requests will be for a specific byte range:
guard let dataRequest = loadingRequest.dataRequest else { return false }
let lower = dataRequest.requestedOffset
let upper = lower + Int64(dataRequest.requestedLength) - 1
let range = (lower...upper) // inclusive bounds, matching the Range header
A simplistic implementation would make a GET request with the Range header set to the requested byte range, download the data using URLSessionDownloadTask, and pass the result to the loading request as follows:
loadingRequest.dataRequest?.respond(with: data)
loadingRequest.finishLoading()
A problem with this implementation is that the request doesn’t receive data progressively, but rather in one big bolus at the tail end of the URL task. The respond(with: data) method is designed to be called numerous times, progressively adding more and more data as it is received. AVPlayer decides whether or not playback is likely to keep up based on the rate at which data is passed to the data request via respond(with: data). For this reason, I recommend using a URLSession configured with a URLSessionDataDelegate, and downloading the data using a URLSessionDataTask so that the data delegate can pass chunks of data to the loading request’s data request as each chunk is received:
func urlSession(_ session: URLSession, dataTask: URLSessionDataTask, didReceive data: Data) {
    // Forward each chunk to the loading request as soon as it arrives;
    // `loadingRequest` is the request this data task was created for.
    loadingRequest.dataRequest?.respond(with: data)
}
When the URLSessionDataTask finishes successfully or with an error, finish the loading request accordingly:
loadingRequest.finishLoading() // or loadingRequest.finishLoading(with: error)
If the user starts skipping or scrubbing around in the file, or if the network conditions change dramatically, the resource loader may elect to cancel an active request. Your delegate implementation should cancel any URLSessionTasks still in progress. In practice, requests can be started and cancelled in rapid succession. Failure to properly cancel network requests can degrade overall streaming performance very quickly.
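One way to do this is to keep a map from loading requests to their URL tasks, so the didCancel callback can cancel the right task. A sketch (the tasksByRequest dictionary is an assumed implementation detail):

func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                    didCancel loadingRequest: AVAssetResourceLoadingRequest) {
    // `tasksByRequest` is a hypothetical [AVAssetResourceLoadingRequest: URLSessionTask]
    // dictionary, populated when each loading request spawns its task.
    if let task = tasksByRequest.removeValue(forKey: loadingRequest) {
        task.cancel()
    }
}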
URL Manipulation
I’ve skipped over an important part of implementing AVAssetResourceLoaderDelegate. Your delegate will never be given an opportunity to handle a loading request if the AVURLAsset’s url uses an http or https url scheme. In order to get the resource loader to use your delegate, you must initialize the AVURLAsset using a url that has a custom scheme:
let url = URL(string: "myscheme://example.com/audio.mp3")!
let asset = AVURLAsset(url: url)
What I recommend doing is prefixing the existing scheme with a custom prefix:
myschemehttps://example.com/audio.mp3
This is a non-destructive edit that can be removed later. Otherwise it would be more difficult to determine whether to use http or https when handling the loading request.
Your resource loader delegate implementation should check for the presence of your custom url scheme prefix when determining whether or not it can handle a loading request. If so, you’ll strip the prefix from the loading request’s url purely as an implementation detail, using the original url value when fulfilling the loading request. The resource loader doesn’t need to know that you’re manipulating the url in this way.
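A sketch of that round trip (the prefix itself is arbitrary):

let schemePrefix = "myscheme" // anything that isn't a real scheme

// "https://example.com/ep.mp3" -> "myschemehttps://example.com/ep.mp3"
func prefixed(_ url: URL) -> URL? {
    guard var components = URLComponents(url: url, resolvingAgainstBaseURL: false),
        let scheme = components.scheme else { return nil }
    components.scheme = schemePrefix + scheme
    return components.url
}

// "myschemehttps://example.com/ep.mp3" -> "https://example.com/ep.mp3"
func unprefixed(_ url: URL) -> URL? {
    guard var components = URLComponents(url: url, resolvingAgainstBaseURL: false),
        let scheme = components.scheme, scheme.hasPrefix(schemePrefix) else { return nil }
    components.scheme = String(scheme.dropFirst(schemePrefix.count))
    return components.url
}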
Warning: if you forget to modify the url scheme for the AVURLAsset, your delegate method implementations will never be called.
Special Implementation in ’sodes
My resource loader delegate for ’sodes will be optimized for podcast streaming. When it receives a data request from a resource loader, it will first check a locally-cached “scratch file” to see if any portions of the requested byte range have already been downloaded and written to the scratch file during a previous request. For any overlapping ranges, the pre-cached data will be passed to the data request. For all the gaps, the data will first be downloaded from the internet, and then both written to the scratch file and passed to the data request. In this way, I can download each byte only one time, even across multiple app sessions.2
As byte ranges are downloaded, I write the data to the scratch file using NSFileHandle. If written successfully, I annotate the downloaded range in a plist stored in the same directory as the scratch file. The plist gets updated at regular intervals during a download session. I combine all the contiguous or overlapping downloaded byte ranges when generating the plist, so that it’s easy to parse for gaps in the scratch file when servicing future data requests. The plist is necessary because I am not aware of any facility in Foundation that can determine whether a given file contains ranges of “empty” data. Indeed, an “empty” range might not even contain all zeroes. I take great pains to ensure that the loaded byte range plist is only updated after the data has been successfully written to the scratch file. I’d rather err on the side of having data the plist doesn’t know about than have the plist report a range of data that hasn’t actually been downloaded.
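The gap-finding step amounts to interval arithmetic over byte ranges. A simplified sketch of the planning logic, assuming (as described above) that the downloaded ranges are kept sorted and non-overlapping:

/// Splits a requested byte range into subranges already present in the
/// scratch file ("hits") and subranges that still need to be downloaded
/// ("gaps"). Bounds are inclusive, matching HTTP Range semantics.
func plan(request: ClosedRange<Int64>, downloaded: [ClosedRange<Int64>])
    -> (hits: [ClosedRange<Int64>], gaps: [ClosedRange<Int64>]) {
    var hits: [ClosedRange<Int64>] = []
    var gaps: [ClosedRange<Int64>] = []
    var cursor = request.lowerBound
    for range in downloaded where range.upperBound >= cursor
        && range.lowerBound <= request.upperBound {
        if range.lowerBound > cursor {
            gaps.append(cursor...(range.lowerBound - 1))
        }
        let hitUpper = min(range.upperBound, request.upperBound)
        hits.append(max(range.lowerBound, cursor)...hitUpper)
        cursor = hitUpper + 1
        if cursor > request.upperBound { break }
    }
    if cursor <= request.upperBound {
        gaps.append(cursor...request.upperBound)
    }
    return (hits, gaps)
}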
GitHub
I’ve posted to GitHub a slightly modified version of the actual code that will go into ’sodes. You can see it here. It’s MIT licensed, so feel free to re-use any of it as you see fit. It is not intended for use as a re-usable framework since it’s heavily optimized for the needs of ’sodes. The Xcode project has a framework called SodesAudio which has all the AVFoundation-related code, including my resource loader delegate. There’s also an example host app that plays an episode of Exponent on launch. The example app has simple playback controls, and also a text view that prints out the loaded byte ranges that have been written to the scratch file. The ranges are updated as more data is received.
- If you make or know of an app that solves this problem in a different way, I’m anxious to hear about it. ↩
- If the user jumps around between multiple episodes, this will negate that effort. I could guard against this by keeping more than one scratch file around in the cache, but for now I’m only keeping a single scratch file around, so that I can minimize disk usage. Disk space tends to be more constrained on iOS devices than network bandwidth. ↩