Rethinking Computering
With the ongoing iPad-ification of Apple computer systems, it’s time to rethink how to properly deploy modern compute devices.
TL;DR: Forget the internal SSD. Everything that is yours should be on a fast external bus.
Computering is the term I use for all the various things we now do with computers. We don’t just compute, we watch, we listen, we play, we chat—all of these things comprise computering.
I recently lost all my data on my Apple M1 MacBook Air (see The Day My MacBook Died). I have been operating under old assumptions about what a computer system is: everything in a single box, with the dearest bits backed up externally. The increasing reliability of hard drives, and now SSDs, has bred complacency, greatly reducing any sense of urgency to perform this vital task on a regular basis.
Sure, I could have completely avoided my drama had I simply backed things up on a weekly, or even monthly, basis. But I didn’t. I was, as we all are in modern life, preoccupied.
This engraving from the 15th century captures our modern life perfectly. Each of the gremlins tugging, sniping, poking, and grabbing at St. Anthony represents one of the distractions from our obligations, habits, needs, and desires that we are bombarded with daily. Not to mention all of the external noise from media and advertisements.
Put your phone away: one gremlin gone. Leave the phablet/tablet at home: another gremlin gone. Turn off the TV: another gremlin gone. Stay off of social media: yet another gone. Don’t buy all that stuff they keep telling you to buy: several gremlins gone. Find quiet time for yourself and spend focused time with loved ones: even more gremlins gone. This is our modern life. All of these distractions are consuming us in unpleasant, unwanted ways.
Now that I have recovered much, but not all, of my data, I’ve been reflecting on my assumptions about how I use computers and whether my thinking is outdated—it is. So now is the time not only to rethink my usage patterns, specifically how to preserve my data, but also to rethink how the computers I use should be configured.
Where we were
Computers developed from the model of a fast internal bus connecting the CPU and memory to all other I/O, storage, peripherals, and network interfaces. Moving data to and from I/O devices, storage, peripherals, and networks was much slower because of the inherent limitations of those devices. It made sense, therefore, to keep as much as possible internally connected to the system bus. Additionally, the overall system was made up of many individual components and interfaces to each of the subsystems.
Then, everything got faster, not just CPUs, memory, and the internal bus. Slow hard disks were replaced with fast SSDs. Networks got a lot faster. Connections to external devices got a LOT faster.
Where we are
A parallel development was the reduction of sub-components, both for cost savings and for speed. This has culminated in the system-on-a-chip architecture, where the CPU, memory, and I/O subsystems and their controllers are etched into a single chip instead of many small, special-purpose chips. A single chip means a lot less soldering, lower cost, and much higher performance. The only downside: if any part of the chip fails, the whole chip is basically useless.
One example of this architecture is the Raspberry Pi. For under $100, you can get a complete system board with more compute power than computers of just 10 years ago. Hook up your own I/O devices and your favorite storage device and you have a fully usable system.
The evolution of Apple computer systems went from traditional lines of desktop and laptop systems to much smaller devices: iPods, iPhones, and iPads. These are all essentially systems-on-a-chip with no moving parts and few, if any, replaceable parts. In recent years, this approach has appeared in almost all of Apple’s computer systems; what I call the iPad-ification of Apple computers. Everything is on a single board or a single chip. No moving parts; with very few exceptions, no fans. Nothing to upgrade—the configuration is fixed. Very little to repair or replace. Most people never needed to change their initial configuration, which made this approach viable.
This is attractive because of the lower cost, higher performance, lower energy consumption, low heat, and slimness/compactness of these designs. This is the computing landscape today. There is no going back to bulky, noisy, hot computers, regardless of how configurable or repairable those systems would be.
Where we want to go
In this modern scenario where the SSD, or backing store, is irreplaceable, new thinking about the backing store and longevity of your data is in order. When any component of the single-board system fails, all data is forfeit.
The traditional solution is a backup process performed on a regular schedule. Data may be backed up to a server on the local network, to a remote (cloud) server, or to an external storage device connected directly to the system being backed up.
The downsides of this approach are the backup interval and accessibility. There will always be a gap between the last backup and whenever the storage system fails. And the system being backed up must be reachable by the remote store, which means always on; or an external device must be manually attached and the process initiated.
Sure, there are other solutions, such as Apple’s Time Machine. But again, the backup device must be manually attached.
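For the directly-attached case, the manual backup step itself can be a one-liner. Here is a minimal sketch using rsync (the function name and the example volume paths are my own hypothetical choices, not a prescription):

```shell
#!/bin/sh
# Minimal mirror-style backup sketch.
# backup SRC DEST  -- makes DEST an exact mirror of SRC.
backup() {
  src="$1"
  dest="$2"
  # -a preserves permissions, timestamps, and symlinks;
  # --delete removes files from DEST that no longer exist in SRC.
  rsync -a --delete "$src/" "$dest/"
}

# Example invocation (hypothetical volume names):
# backup /Volumes/Data /Volumes/Backup/Data
```

The trailing slashes matter to rsync: they copy the *contents* of the source directory rather than the directory itself.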
An alternate approach to modern, irreplaceable internal storage is to treat that storage differently.
My new thinking
The approach I will adopt is to think of the internal SSD as a kind of level-X cache. Apps get installed there and launched from there as they currently do.
The system still uses it for page swapping (though paging should ideally use circuitry far more resilient to rewrites than the SSD).
Bootable backups of that internal storage are done only occasionally, since apps are not updated frequently.
No data is stored there; if any is, it is only there temporarily and is moved off as soon as possible.
All personal data is stored in a fast external SSD enclosure connected via a fast USB-C (Thunderbolt 3 or 4) connection.
The external data storage is backed up on a regular, frequent schedule, depending upon activity.
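On macOS, that regular, frequent schedule can be automated with a launchd agent instead of relying on memory and discipline. A sketch of such an agent (the label, script path, and schedule here are hypothetical examples):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.example.external-backup</string>
  <!-- Hypothetical path to your own backup script -->
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/bin/backup-external.sh</string>
  </array>
  <!-- Run every day at 18:00 -->
  <key>StartCalendarInterval</key>
  <dict>
    <key>Hour</key><integer>18</integer>
    <key>Minute</key><integer>0</integer>
  </dict>
</dict>
</plist>
```

Save it under ~/Library/LaunchAgents/ and activate it with launchctl. The backup still only succeeds when the external drive is attached, but at least the schedule is no longer a gremlin you have to remember.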
My new dream machine
You can already get really fast NVMe SSDs enclosed in really fast controllers. These are ideal for the Thunderbolt 3 and Thunderbolt 4 connections present on all Macs since 2020. See How to Choose a Fast External SSD for Your Mac.
Now we need a MacBook to properly fit into this approach. Such a system would have:
Maximum possible RAM: 64GB to 128GB or more.
Minimum possible SSD, allowing for apps and working space: 256GB would do for most users; more would be needed for photographers, graphic artists, etc.
Whatever CPU (standard, Pro, or Max) the tasks to be performed require. In reality, the CPU doesn’t really matter except for the most CPU-intensive workloads; most people would do fine with the standard or Pro versions.
Pair such a system with an external drive and we are nearly there.
If the system has 64GB or more RAM, I would really like to be able to configure macOS to use part of that memory as a RAM disk for system paging. RAM is designed to be rewritten; SSDs, not so much. That way, the RAM takes the abuse of paging rather than beating up the SSD.
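macOS does not expose swap placement as a user setting, but the RAM-disk half of the idea is scriptable today with the stock hdiutil and diskutil tools. A sketch, assuming you want the size specified in gigabytes (the `ram://` scheme takes its size in 512-byte sectors, so 8GB works out to 16,777,216 sectors):

```shell
#!/bin/sh
# Sketch: create a RAM disk on macOS.
# hdiutil's ram:// URL takes a size in 512-byte sectors.

gb_to_sectors() {
  # 1 GB = 1024^3 bytes; 512 bytes per sector.
  echo $(( $1 * 1024 * 1024 * 1024 / 512 ))
}

make_ramdisk() {
  sectors=$(gb_to_sectors "$1")
  # Attach a RAM-backed block device (no filesystem yet);
  # xargs trims the whitespace hdiutil appends to the device name.
  dev=$(hdiutil attach -nomount "ram://$sectors" | xargs)
  # Format and mount it as a volume named "RAMDisk".
  diskutil erasevolume HFS+ "RAMDisk" "$dev"
}

# Example: make_ramdisk 8   # 8GB -> ram://16777216
```

The contents evaporate at shutdown, which is exactly what you want for scratch space. What is still missing, as far as I know, is any supported way to point the system pager at it; that is the part I would need Apple to provide.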
I looked today in the Apple Store; the closest thing I found to this configuration is a MacBook Pro 14” with an M3 Max CPU, 128GB RAM, and a 512GB SSD for about $4,500. A bit pricey. But that, I think, would be a beastie that would have a very long useful life.
I would rather Apple produce a MacBook Air M3 with 128GB RAM and a 512GB SSD, with a way to user-configure system swap space, but I doubt any of that will ever happen. Price aside, cooling might be an issue.
<sigh>
One can dream …
posted at: 15:44 | path: /Computering | permanent link to this entry