Cell Height Caching Dilemmas in Unread
The biggest performance challenge in Unread is calculating attributed strings and cell heights for the article summary lists. Those cells are practically just a UIView with some text drawn into it using the NSAttributedString UIKit Additions.
Once an attributed string or cell height is calculated, it is cached (in memory only) and re-used. This vastly improves scrolling performance, for obvious reasons. The challenge isn’t in caching per se, but in the fact that since Unread is an RSS reader, it is not uncommon for a single table view to contain 5,000 or more items.
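As a rough illustration, an in-memory cache along these lines might look like the sketch below. The class and method names are hypothetical stand-ins, not Unread’s actual implementation; the measurement step assumes the standard boundingRectWithSize:options:context: API.

#import <UIKit/UIKit.h>

@interface UNRHeightCache : NSObject
- (CGFloat)heightForArticleID:(NSString *)articleID width:(CGFloat)width text:(NSAttributedString *)text;
@end

@implementation UNRHeightCache {
    NSCache *_cache; // in-memory only, so entries are evicted under memory pressure
}

- (instancetype)init {
    if ((self = [super init])) {
        _cache = [[NSCache alloc] init];
    }
    return self;
}

- (CGFloat)heightForArticleID:(NSString *)articleID width:(CGFloat)width text:(NSAttributedString *)text {
    NSNumber *cached = [_cache objectForKey:articleID];
    if (cached) {
        return (CGFloat)cached.doubleValue;
    }
    // Measure the attributed string at the cell's content width.
    CGRect bounds = [text boundingRectWithSize:CGSizeMake(width, CGFLOAT_MAX)
                                       options:NSStringDrawingUsesLineFragmentOrigin
                                       context:nil];
    CGFloat height = (CGFloat)ceil(CGRectGetHeight(bounds));
    [_cache setObject:@(height) forKey:articleID];
    return height;
}

@end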
In the current beta of version 1.2, when an article list is fetched from the local database, I have been pre-calculating all strings and heights on a background queue before popping back up to the main queue (in batches).
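In broad strokes, the batching might look something like this sketch. The batch size, the UNRArticle model class, and the helper methods are hypothetical, not Unread’s actual code.

static const NSUInteger UNRPrecalculationBatchSize = 100; // illustrative batch size

- (void)precalculateHeightsForArticles:(NSArray *)articles width:(CGFloat)width {
    dispatch_async(self.backgroundQueue, ^{
        NSUInteger count = articles.count;
        for (NSUInteger start = 0; start < count; start += UNRPrecalculationBatchSize) {
            NSRange range = NSMakeRange(start, MIN(UNRPrecalculationBatchSize, count - start));
            NSArray *batch = [articles subarrayWithRange:range];
            // Build the attributed string and cache the height for each article in the batch.
            for (UNRArticle *article in batch) {
                NSAttributedString *summary = [self attributedSummaryForArticle:article];
                [self.heightCache heightForArticleID:article.identifier width:width text:summary];
            }
            // Pop back up to the main queue one batch at a time.
            dispatch_async(dispatch_get_main_queue(), ^{
                [self appendPrecalculatedArticles:batch];
            });
        }
    });
}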
In versions 1.0 and 1.1, I didn’t pre-calculate anything. Instead, I used UITableView’s new estimatedHeightForRowAtIndexPath: optional delegate method. It isn’t ideal, but it worked reasonably well on slower devices without significantly increasing complexity.
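Used that way, the delegate looks roughly like the following sketch (the estimate value and helper names are illustrative, reusing the hypothetical cache above).

// Returning a rough constant lets the table view defer exact measurement
// until a row is about to come onscreen. The 96-point estimate is illustrative.
- (CGFloat)tableView:(UITableView *)tableView estimatedHeightForRowAtIndexPath:(NSIndexPath *)indexPath {
    return 96.0;
}

- (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath {
    UNRArticle *article = [self articleAtIndexPath:indexPath]; // hypothetical accessor
    NSAttributedString *summary = [self attributedSummaryForArticle:article];
    return [self.heightCache heightForArticleID:article.identifier
                                          width:CGRectGetWidth(tableView.bounds)
                                           text:summary];
}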
There is a big drawback to this approach. As soon as a table view’s delegate implements that method, frustrating bugs crop up: tapping the status bar to scroll-to-top is wildly inaccurate, you can’t scroll to a target indexPath with reliable accuracy, etc. This is true even when I try returning an exact pre-calculated height from a call for an estimated height. This means I can’t preserve the semantic content offset when reloading a table view (i.e., when new stuff has been inserted above the current offset). The table view appears to jump around as new content is loaded, which is an irritating experience for the user.
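For context, preserving that offset is only feasible when the heights of the newly inserted rows are exact; with estimates in the mix, the adjustment drifts. A minimal sketch of the idea, assuming exact cached heights and hypothetical helper names:

- (void)reloadPreservingOffsetWithInsertedArticles:(NSArray *)insertedArticles {
    // Sum the exact heights of the rows inserted above the current position.
    CGFloat insertedHeight = 0;
    for (UNRArticle *article in insertedArticles) {
        insertedHeight += [self cachedHeightForArticle:article];
    }
    CGPoint offset = self.tableView.contentOffset;
    [self.tableView reloadData];
    // Shift the offset down by the height of the new content so the rows the
    // user was looking at stay where they were.
    offset.y += insertedHeight;
    self.tableView.contentOffset = offset;
}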
So back to version 1.2. Pre-calculating works great—on newer devices. But older devices (the iPhone 4S and earlier, and older iPod touch models) have a hard time calculating 5,000+ cell heights when viewing the “All Articles” screen. Even though they’re batched in, the entire sweep can still take 5 to 10 seconds or more. Not ideal. When calculations take that long, the window of exposure to potential race conditions is really wide. What if the user switches themes or font sizes during that interval?
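One common way to narrow that window (sketched below with hypothetical names, not Unread’s actual code) is to tag each sweep with a generation token and bail out as soon as the appearance settings change:

- (void)sweepArticlesForPrecalculation:(NSArray *)articles {
    // The generation is bumped on the main thread whenever the theme or font size changes.
    NSInteger generation = self.appearanceGeneration;
    dispatch_async(self.backgroundQueue, ^{
        for (UNRArticle *article in articles) {
            if (self.appearanceGeneration != generation) {
                return; // results would be stale; a fresh sweep will start over
            }
            [self calculateAndCacheHeightForArticle:article];
        }
    });
}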
Some Possible Solutions
Several approaches have come to mind for getting acceptable results on all devices. Here are the ones I’ve considered:
1) Write Heights to Disk
I could save the pre-calculated heights to disk, so that they’re only ever calculated once. The very first time a given set of articles is loaded would be slow, but subsequent loads would only need to process the newest articles. But there’s a problem: cell heights change depending on the relative date stamp. Articles published today use the time (11:38 AM), articles published in the last seven days use the day name and number (THU 18), and so forth. So the caching strategy would require keeping track of the date the height was calculated, the current date, and the date the article was published. If I ever allow users to rename feeds, then every cached height for a given feed would need to be invalidated, too. What if that happens while the app is pre-calculating thousands of new articles on a background queue? Thinking of how I’d implement all this makes me queasy.
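To make the bookkeeping concrete, a disk-cache key would have to encode at least the article, the feed’s display name, and the date bucket the stamp currently falls into. The bucket names and key format below are a hypothetical sketch, not Unread’s implementation:

typedef NS_ENUM(NSInteger, UNRDateBucket) {
    UNRDateBucketToday,     // stamp shows a time, e.g. "11:38 AM"
    UNRDateBucketThisWeek,  // stamp shows a day name and number, e.g. "THU 18"
    UNRDateBucketOlder      // stamp shows a full date
};

static UNRDateBucket UNRDateBucketForDate(NSDate *published, NSDate *now) {
    NSCalendar *calendar = [NSCalendar currentCalendar];
    if ([calendar isDate:published inSameDayAsDate:now]) {
        return UNRDateBucketToday;
    }
    if ([now timeIntervalSinceDate:published] < 7 * 24 * 60 * 60) {
        return UNRDateBucketThisWeek;
    }
    return UNRDateBucketOlder;
}

static NSString *UNRHeightCacheKey(NSString *articleID, NSString *feedName, UNRDateBucket bucket) {
    // A feed rename or a bucket change silently invalidates the old entry,
    // because the key it was stored under no longer matches.
    return [NSString stringWithFormat:@"%@|%@|%ld", articleID, feedName, (long)bucket];
}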
2) Redesign the Cells
I could write the heights to disk, but eliminate the need for date-based cache invalidation by redesigning the date stamps to never alter the flow of the other text. I really don’t want to do this. I’ve already spent an embarrassing amount of time experimenting with article cells. I’m happy with the current look. Besides, this wouldn’t solve the case where a user can rename a feed.
3) Revert to Estimated Heights
I could revert to the estimated-height approach I used in versions 1.0 and 1.1, and just throw out improvements like preserving the semantic content offset when reloading for new data.
4) Scrolling-Based Batching
I could batch new articles into the table view as you scroll near the bottom of the existing content, thus only ever processing the articles you’re actually going to see. On the surface this sounds like the easiest approach, but note that this adds a new axis of interdependence between the model layer (objects that fetch and sort articles from the database) and the view controller layer. It also means having to think about what should and shouldn’t trigger a load-more. What if the app is restoring the previous interface state on the next launch, and the last-visible article was the 4,000th item in a long list of articles?
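A bare-bones version of that trigger might look like the sketch below (the threshold and helper names are illustrative), which also hints at how pagination decisions start leaking into the view controller:

- (void)scrollViewDidScroll:(UIScrollView *)scrollView {
    CGFloat distanceFromBottom = scrollView.contentSize.height
                               - (scrollView.contentOffset.y + CGRectGetHeight(scrollView.bounds));
    // Start fetching the next batch a few screens before the user reaches the end.
    if (distanceFromBottom < 600.0 && !self.loadingMoreArticles) {
        self.loadingMoreArticles = YES;
        [self loadNextBatchOfArticlesWithCompletion:^{
            self.loadingMoreArticles = NO;
        }];
    }
}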
What I Think I’ll Do For Now
Since the problem with exact pre-calculation only affects older devices, I’m going to try Option 5:
5) Device-Specific Caching
Newer Devices • Newer devices will not use the estimated row height delegate method, but will instead pre-calculate all strings and heights on a background queue before popping back to the main thread to update the table view. Since I’m not using estimates, I’ll be able to preserve the user’s perceived content offset when reloading for newer offscreen data. Status bar taps will also be reliably accurate.
Older Devices • Older devices will continue to use the estimatedHeightForRowAtIndexPath: method from versions 1.0 and 1.1. Heights will not be pre-calculated. This is an acceptable trade-off between complexity and performance, since it’s limited to a minority pool of devices.
But how can I do that? As I stated above, merely having a table view’s delegate implement the estimated row height method introduces scrolling inconsistencies. The answer is Dynamic Method Resolution. I provide an implementation like this in my table view’s delegate:
+ (BOOL)resolveInstanceMethod:(SEL)aSEL {
    // Only devices that can't afford pre-calculation get the estimated-height
    // delegate methods, and they get them added at runtime.
    if ([UIDevice unr_supportsPreCalculatedArticleCellHeights] == NO) {
        if (aSEL == @selector(tableView:estimatedHeightForRowAtIndexPath:)) {
            class_addMethod([self class], aSEL, (IMP) unr_tableViewEstimatedHeightForRowAtIndexPath, "f@:@@");
            return YES;
        }
        else if (aSEL == @selector(tableView:estimatedHeightForHeaderInSection:)) {
            class_addMethod([self class], aSEL, (IMP) unr_tableViewEstimatedHeightForHeaderInSection, "f@:@i");
            return YES;
        }
        else if (aSEL == @selector(tableView:estimatedHeightForFooterInSection:)) {
            class_addMethod([self class], aSEL, (IMP) unr_tableViewEstimatedHeightForFooterInSection, "f@:@i");
            return YES;
        }
    }
    return [super resolveInstanceMethod:aSEL];
}
The delegate’s .m file never implements the actual protocol methods. Instead, if the current device doesn’t support pre-calculated heights, I resolve those methods by adding custom implementations at runtime using class_addMethod. This works because the estimated height methods are optional. The super implementation of resolveInstanceMethod: fails gracefully on newer devices. On older devices, my custom functions (prefixed with unr_ in the code above) are called instead.
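For readers unfamiliar with class_addMethod, the functions being installed are plain C functions that receive the implicit self and _cmd before the declared arguments. A hypothetical sketch of one of them (the estimate value is illustrative):

// Matches the "f@:@@" type encoding above: on the 32-bit devices in question,
// CGFloat is a float, followed by the implicit self and _cmd, then the
// table view and index path arguments.
static CGFloat unr_tableViewEstimatedHeightForRowAtIndexPath(id self, SEL _cmd, UITableView *tableView, NSIndexPath *indexPath) {
    return 96.0; // a rough per-row estimate
}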
This approach adds the least possible complexity while still improving the app for some users. It also buys me some time while I continue to explore my options.