Chris Padilla/Blog
My passion project! Posts spanning music, art, software, books, and more. Equal parts journal, sketchbook, mixtape, dev diary, and commonplace book.
- 📸 Immich: The alternative to Apple Photos and Google Photos. I've enjoyed it so far. The face-detection AI for grouping photos by person is a slick feature.
- 📺 Tube Archivist: Designed for backing up YouTube videos. I was previously doing this manually with yt-dlp, but this app sits on top of it and enables a very clever way of managing multiple downloads, with plenty of automation baked in. Includes an interface for watching, as well. One of my favorites!
- 🪼 Jellyfin: Plex is the popular choice for media streaming, but I was interested in going all in on OSS, so Jellyfin was the pick for hosting movies, shows, and my music library. In addition to the web client, it comes with native mobile apps that support downloading locally. I... don't actually watch a ton of movies or shows. But it's been great for hosting my music library!
- 📖 Kavita: I gave BookLore a try, but it seems to be collapsing rapidly. As a comics enjoyer, I liked the looks of Kavita. It groups books within a series rather nicely. And, of course, handles books of words just as well.
- 🐼 Monica: Monica is a CRM for friends and family. Having a personal database is a great container for keeping track of the people that matter most in your life. For a while now, I've taken Derek's advice on this practice, and Monica is actually one of his recommended solutions. I've only used it lightly so far, but it's been the most fulfilling, hands down.
- A food item is a task
- Shoppers are a collection of tasks that are scanned via queues
- Cashiers (and self-checkout machines) are workers
- Individual cashier lines are queues
- Human cashiers vs. self-checkout represent priority queues
- Customer Service is, as mentioned, a dead-letter queue
- Reading headlines on LifeHacker about a neat new service making a splash across the pond called Spotify — where you can legally stream a seemingly infinite library of music.
- From that same evening: ripping Plastic Beach by the Gorillaz onto my computer, loading it into my iTunes library, and transferring it to my iPod.
Ursula's Painting
From The Art of Kiki's Delivery Service, Hayao Miyazaki on Ursula's painting:
“It doesn’t matter what Ursula paints as long as it’s spirited. Given how her painting is thematically related to the film, the actual paintings had to be powerful. The paintings convey the life of a secluded female artist more than they do some message. I was looking forward to drawing them myself once I was done with the storyboards [laughs]. When I couldn’t afford to do so, I recalled the print, ‘Ship Flying Over the Rainbow.’ The print was made by a teacher at a school for the disabled, Hachinohe City Minato Special Junior High School. We obtained permission from the instructor and added a face to the original illustration. Replacing the horse’s face with Kiki’s would have been inconceivable.”
The original illustration wasn't colored, so credits in the film show:
"Ship Flying Over The Rainbow"
Painted by students of Hachinohe City Minato Special Junior High School Handicapped Children's Class
Lovely.
99% Perspiration
From Walt Stanchfield's Gesture Drawing for Animation, a favorite around here:
It all starts with preparation, which is the “open sesame” of all genius. Even the geniuses admit it’s 99% hard work and 1% genius.
In context, this is talking about capturing "The Essence" of an image. For the layman to a craft, it's what they would imagine is most of the work going into the piece. The emotion, the story, the idea.
Starting to learn drawing a few years ago, I thought ideas would be the hard part. Turned out that I had ideas pouring out of my ears! Enough so that I couldn't keep up with them all.
And so the idea is the easy part. Most of the time is spent in the trenches, working with craft.
A gesture is accomplished through a complex array of skills working in tandem: composition, perspective, anatomy, construction, expression, draftsmanship, inking, shading, and value. And each of those is a sophistication unto itself.
This is, mostly, relieving. Craft can be improved, and craft is much more sustainable to work at over a long stretch of time. I find craft to be grounding; it's the thing that aligns you with the beauty of it all, and it's the way we become the image, the piece, and so on.
I write this as I'm slowing the pace of output on this ol' dot com so I can spend more time learning and honing craft. I'm trading finished works for etude books and study material. It's a quiet learning sabbatical, with occasional transmissions from the underground where the roots are being laid. With time, some nice fruit should bloom from it. But until then, on with craft.
Manus — Straight Jig
Getting jiggy with it!
Forays Into Home Lab Apps
I'm dipping my toes into the rabbit hole of Home Labbing (or, perhaps, am tripping down the loose gravel that is rapidly dropping me into the vast expanse of it!).
I've seen discourse around Synology setups, home NAS systems, and really sophisticated data backup strategies. This just seemed right up my alley — the intersection of managing apps, data, and — oh, by the way — that data can include all sorts of stuff I care about: my ripped CDs from the 2000s, my collection of ebooks, and pictures from all of my vacations??
I'll hit the highlights — my laughably minimal setup as I'm just getting started, the apps I'm running, and my two cents on the experience so far.
The Setup — Keeping It Simple
So far, the number of people who care about this in my household is 1 — me. I have little need for the N in NAS (Network-Attached Storage) and am having fun setting things up on my laptop.
That said, a DAS (Direct-attached Storage) box is out of the budget while I'm just experimenting. A NAS can run $1,000, while a DAS trails behind slightly. Not the price point for a maiden voyage.
Those deeply entrenched in this hobby may scoff at this, but my current solution is — drum roll please — an external hard drive. I simply slap this into my computer directly, run all my apps on my machine, and then access them on the same machine.
This skips over a lot of what looks fun for a maturing setup: building a machine, ordering parts, setting up the network, getting an Uninterruptible Power Supply, etc. But I was starting to get educated with all of this only to realize it was keeping me from getting started in the first place.
So yes, I'm excited to eventually have hardware to talk about and neat networking approaches at a later time. But for now, keeping it simple is keeping me moving.
Worth saying: I've employed the 3-2-1 backup strategy for a while now. I have a primary drive, a mirrored drive, and the primary backed up to the cloud via BackBlaze. Wes Bos gives a great pitch for why he uses BackBlaze on Syntax.
Now is also a good time to shout out FreeFileSync, a handy bit of OSS that I've used since 2010. This helped with migrating files and currently makes keeping drives in sync painless. It's manual, but it still works wonderfully. I've also picked up DaisyDisk for getting a visual on what's taking up how much space on any given drive.
Apps!
Pretty much all of these run as Docker containers on my machine, keeping themselves contained and incredibly simple to set up. I'm just wrapping up getting each populated with data; for the most part, all you need to do is point the app to the directory you want data pulled from, and you're done. There was no issue with having the app on my machine and the data on another drive, so long as it's connected, of course.
Here's what I've got so far:
How It's Going So Far
This has been supremely enjoyable on many fronts!
Parsing data, collecting it, reorganizing it, and digging deep into my own personal archive has been naturally nostalgic. Plenty of "wow, I forgot all about this!" arose along the way. This goes for my personal photos, music collection, and even YouTube series that I previously enjoyed. Tending the garden allows us to "live twice," enjoying the highlights all over again.
Data loss is something I've been burned by before. Growing up, my own technical experiments pushed our Windows 98 machine to breaking a few times, to the point where the only solution was a clean install. On the subject of YouTube in particular, I've been frustrated to see videos disappear for one reason or another. Setting up this homelab of sorts only further solidified my own backup strategy, and now more easily facilitates backing up other ephemeral media.
On top of that — I'm much more likely to actually engage with what's stored as well! These apps provide an enjoyable interface that actually keeps me coming back. A huge improvement over Finder!
And, of course, as a technologist, the setup is the most fun part. Along the way, I've spun up Docker Containers, written migration scripts, used new CLI apps, and just had a ball on the computer! It's exciting (daunting? terrifying?) that there's much more to do in the realm of hardware in this space, and I'm looking forward to branching out that way when the time comes!
I share this not to impress anyone with how tricked out my setup is (clearly), but to encourage you. If you're looking to get going, you don't have to wait for hardware. Roll up your sleeves and start tinkering!
Faber — Phantom of the Keys
Nothing says spooky like the chromatic scale!
Monotasking with AI Agents
I've been seeing great gains in one open window for an AI coding assistant like Claude Code, but then heard that the real productivity boost was to have multiple in flight. The velocity gains seemed exciting, and there was ample downtime between prompt and response. So I decided to give it a whirl over a couple of weekends.
I tried this with terminal tabs first, then dabbled in UIs built around this philosophy, such as Conductor, and even ogled the wildness that is Gas Town. (Maggie Appleton is recommended reading on this wild speculative experiment towards agent orchestration.)
If you haven't given this workflow a go, I have to warn you that it is initially exhilarating. Jumping back and forth between tasks, typing solutions and features into being, is addictive. There's an intoxication in seeing work that once took ages being trivialized in parallel.
Naturally, though, this way of working has tremendous tradeoffs. First — throughput becomes the priority over quality. When there are other tasks waiting to be addressed, a given task that requires pause breeds impatience, making it all the more tempting to skim generated code, gloss over details, and, inevitably, make mistakes. So more is done, but worse than before.
There's, of course, our mental wiring as well. There is plenty of research out there supporting that our minds are terrible multitaskers. We do not truly run multiple threads — it's all one thread, with the ability to be interrupted and switch tasks quickly to give the illusion of multitasking. This can be stimulating (hello, social media), but it is also incredibly expensive. So this mode of working is actually more demanding, not less — a complete reversal of how AI products get marketed. That means becoming absolutely exhausted, quickly. But, because the next prompt always contains the chance of a high reward for low effort, you want to keep going. Then, much like above, you are pressured into poor choices and skimming by the fatigue on top of the throughput prioritization.1
I found myself especially susceptible to this — when looking for more to put in flight, I ended up adding tasks and features that were low impact. So I might have a medium-priority task in flight and be working on a smaller feature on the side. But, eventually, that small feature that was only supposed to take a few minutes ballooned into a larger problem that spiraled. Now I'm stuck working on both, and that additional feature likely didn't even need to be on my plate in the first place. The cost to start is low, so I underestimated the complexity of work that seemed trivial.
So, when working with AI coding agents in this way, I was developing brittle features and fixes, spending more time on low-priority tasks that took away from my high-priority ones, and I was more exhausted than ever before from development. This experiment was clearly a failure.
A major concern many engineers have around developing with AI assistance is a less thorough understanding of what they are developing. Hence the term vibe coding. This approach only exacerbates the issue. Forget being rusty with JavaScript — I wasn't retaining the attributes of the features I was working on as a result of this practice. A scary position!
This is not all that different from work modes before agentic coding tools. Monotasking and Deep Work are long-advocated principles that have always worked, and the introduction of the shiny new tool makes no difference here.
The alternate title to this would be "I Tried Multitasking Developing With Claude Code So You Don't Have To". So beware the temptation! The time when the tools are processing your prompt is better spent thinking further on the problem. These tools, while trying to lead with generation, are still most useful to me when I spend ample time using them to think more deeply and efficiently around a problem rather than over-delegating to them.
Even further still — it's better to ignore the hype of 10X productivity buzzing around, and instead aim for being just a bit more efficient, for greater sanity and ownership over the work at hand.
1 Steve Yegge, developer of Gas Town, returns to warn of this exact thing in The AI Vampire.
Queues of Queues at the Grocery Store
There's an idea that's tickled me lately — the idea that a queueing problem can sometimes be tackled by employing... more queues!
Looking at the world around us: in a small retail store, one cashier and one queue can suffice. But in a grocery store, that is typically insufficient. During peak hours, it takes multiple cashiers, each with their own individual queue, to check out customers.
Interestingly, they are also self-organizing, thanks to individual customers having autonomy in the line that they choose. A customer can look at all open cashiers at a glance, see how many people are in line, and even further — can see which ones are being clogged up (by customers with large baskets, customers paying cash vs. card, customers searching glacially for all the coupons they clipped that morning).
Most stores have an element of prioritization based on the workload as well. Cashiers are great at handling shoppers purchasing food for the week. Self-checkout, with its own entirely separate queue, is best suited for those with < 15 items.
You could get granular here — each shopper's cart ultimately constitutes a queue of items that are scanned in an arbitrary order.
There's even a dead-letter queue in many humble grocery stores! My local stores have a customer service desk where you can communicate with a manager who has the ability to review your order, what went wrong, and has extra permissions to resolve an issue.
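As a toy sketch of that mapping, here's the grocery store in Python's standard `queue` module. The item dicts and the "missing price" failure condition are purely illustrative: a lane is a queue of shoppers, a shopper's cart is a queue of items, and anything the cashier can't resolve is routed to the customer-service dead-letter queue.

```python
from queue import Queue


def checkout(cart, dead_letter):
    """Scan every item in a shopper's cart; route failures to customer service."""
    while not cart.empty():
        item = cart.get()
        if item.get("price") is None:   # e.g. a missing barcode
            dead_letter.put(item)        # needs a manager's extra permissions


lane = Queue()          # one cashier's line: a queue of shoppers
dead_letter = Queue()   # the customer-service desk

cart = Queue()          # a shopper is itself a queue of items
cart.put({"name": "milk", "price": 3.5})
cart.put({"name": "mystery produce", "price": None})
lane.put(cart)

# Work the lane until it's empty, like a cashier during peak hours.
while not lane.empty():
    checkout(lane.get(), dead_letter)
```

Nesting queues inside queues like this is the whole trick: each level absorbs burstiness independently, and the dead-letter queue keeps one slow failure from clogging the lane for everyone behind it.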
This all, of course, maps to software.
In terms of systems:
Is it overly obvious that I thought about this while standing, perhaps too long, in line at the store this weekend?
Studying Planes of the Head
Giving my +1 for Marco Bucci's Understanding & Painting The Head! Thorough without being bogged down in the weeds, classic Bucci teaching. Learned a lot from it!
Art & Fear On Approval
And why making with or without it takes courage:
You're expected to make art that's intimately (perhaps even painfully) personal — yet alluring and easily grasped by an audience that has likely never known you personally.
When the work goes well, we keep such inner distractions at bay, but in times of uncertainty or need, we begin listening. We abdicate artistic decision-making to others when we fear that the work itself will not bring us the understanding, acceptance and approval we seek. For students in academic settings, this trouble is a near certainty; you know (and you are correct) that if you steer your work along certain paths, three units of "A" can be yours. Outside academia, approval may be clothed in loftier terms — critical recognition, shows, fellowships — but the mechanism remains the same.
With commercial art this issue is often less troublesome since approval from the client is primary, and other rewards appropriately secondary. But for most art there is no client, and in making it you lay bare a truth you perhaps never anticipated: that by your very contact with what you love, you have exposed yourself to the world. How could you not take criticism of that work personally?
Beethoven – Rage Over A Lost Penny
Will this piece still be funny now that pennies are no longer being produced?
Golden Ratio In Film
Western music is founded on a phenomenon in nature. Certain relationships between frequencies create a harmony, a certain sweetness, where there is little dissonance. Art seems to have a similar counterpart in composition through the Golden Ratio.
I've been having fun crawling through the One Perfect Shot Twitter account to find film stills with this composition principle in action. A few of my favorites below:
A Lifetime of Music Listening
In short — I made a new webpage to chronicle my listening history. Woohoo!
In the distant past of 2010, I distinctly remember two things:
A year later, I was shocked by the instantaneousness of having access to all music ever recorded.
15+ years later, I find my relationship with listening to music vastly different from those days. Teen-specific sentiments aside! I'm listening to a broader range of music, going deeper into sub-genres, and have heard international artists I otherwise would never have been exposed to. That's all well and good!
At the same time, I find myself not listening as deeply, not listening to whole albums, and allowing a large portion of my listening to be guided by unseen forces. Perhaps the pace of life has changed this, perhaps age, perhaps the coming and going of fascinations and interests.
Whatever the reason, I had a moment ripe for reconsidering my listening habits and the technology that supports them.
CDs
Visiting my childhood home, I found stowed away a bunch of physical CDs that I hadn't thought of, let alone heard, in ages. Tucked in jewel cases, some in plastic sleeves from friends who burned playlists for me — nostalgia ensued. I popped several into a physical CD player (one of those big 2000s-era boomboxes was available). After that, I popped one into my car; built in 2016, it still has a CD player. I felt the nearly forgotten anticipation of waiting for the music to start, the tactile satisfaction of placing media into its receiver.
Seeing a natural trail of my listening history in physical form, I realized this sort of thing was too important to me to leave it to chance, tech companies that come and go, etc. I enjoyed being curious about what caught my ear, collecting it as a source of inspiration, and saving it in a way that it can serve as a time capsule later. There's a seeming urban legend about how Andy Warhol would wear a different perfume daily for a couple of months so that he could then remember that point in time. Music has a similar effect; it's a great memory maker.
Unfortunately, physical CDs are not the answer. I found out all too soon that my favorite Japanese City Pop CDs would be $30 each + international shipping. Too many recent favorites don't do print runs of their music. Even so, the practical reality is that most of my listening is done through apps. A physical purchase would be reserved for special albums, not for the entire collection.
It raised the question — what was I really trying to accomplish? If the goal was to be more intentional about my listening and to preserve it, the nerdiest answer I could come up with was a webpage. I remembered those times curating my iTunes library intentionally, and I was suddenly interested in doing the same on my own domain.
The Data Wrangling
The development was largely trivial — the project's meat was in data collection and cleanup. I had to catalog my physical CDs in a spreadsheet. But that only accounted for a fraction of my listening. It took crawling my iTunes library from ~2012, when I made the switch to streaming, as well as exporting my Spotify playlist data.
With that done, several rounds of lassoing data ensued. More memories came flooding back — it took considerable effort to keep good hygiene with an iTunes library. It's easy for naming conventions to drift, for the same artist to have different names based on stylization (ex: Arcade Fire and The Arcade Fire, or Tune-Yards and tUnE-yArDs), and for album art to be inconsistent. At times, it was frustrating. Now, though, it was the right amount of resistance to help me slow down and really savor the collection as a whole.
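That stylization cleanup can be sketched as a small normalization pass. The alias table below is illustrative, not my actual migration data: collapse whitespace and case for lookup, then map known variants to one canonical spelling.

```python
# Hypothetical cleanup pass: collapse stylized artist-name variants
# ("The Arcade Fire", "tUnE-yArDs") to one canonical entry.
ALIASES = {
    "the arcade fire": "Arcade Fire",
    "tune-yards": "Tune-Yards",
}


def canonical_artist(name, aliases=ALIASES):
    """Return the canonical spelling for an artist name, if one is known."""
    key = " ".join(name.split()).lower()   # normalize whitespace and case
    return aliases.get(key, name.strip())  # fall back to the name as given
```

Running the whole library through a pass like this makes the duplicates jump out: any two rows that normalize to the same key are the same artist wearing different stylings.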
Tagging by year listened was a goal, as this was meant to reflect a timeline of listening. The 2000s required big ol' estimates, while from 2016 on I could use Spotify listening data.
While I was at it with my exported Spotify data, I decided to transfer over the top songs by year. This feature is universally beloved — I listened to my first (2016) playlist for years after the fact. Spotify doesn't cover the entire landscape of my listening, but it'll do. Those tracks were tossed into JSON and are now listed under /topsongs/{year}.
Voilà! A full day's effort later, and I have 930 albums listed on my new Music Shelf page. I know that to most readers this is just a list of albums. But for the curator, this is a delightful wall of memories and favorite melodies that stretches a mile long.
The page itself is simply a list with cover art, album, and artist. Of course, there's more I could do. Were I feeling particularly clever, I could whip up integrations with Spotify for further automation. BUT! The point is for this to be manually maintained. To have a process that even, ever so slightly, resembles the modern-day equivalent of getting into the car, driving to FYE, walking through the stacks, making a selection, bringing it home, and THEN putting it on the shelf. It allows time to live more deeply with the music.
Will this change my listening? It already has. Metaphorically, I've gone from only ever listening to the radio to finding my favorites and spinning them on repeat. Even without abandoning Spotify ("Yet you participate in society"), I have a reason to manually add albums, skip the MASSIVE UI encouraging me to hop to the next listening destination right away, and get to know albums better. (Significant for me, since for a time I was almost exclusively listening to algorithmically suggested tracks, before it got to be too much like the snake eating itself.)
The aim is to balance the best of both modernity and older practices. Like any choice around modern technology, our approach informs the tools we use and how we use them, not vice versa.
A Brief Explainer on Caching Strategies
The difference in speed between accessing data from disk and accessing it from an in-memory cache is significant, by a factor of 10-100 in some cases. When serving frequently accessed data in our applications, we want to place a cache in front of the DB to increase speed and lighten read access to disk.
How that cache is populated & read, however, takes careful consideration of requirements and trade-offs. Is it important for the cache to be highly consistent with the DB? Of the two, is it permissible for one to be eventually consistent? Do we want our application to manually manage write and read access to both data sources?
Following is an exploration of several strategies with their strengths and considerations:
Cache Aside (Lazy Loading)
Perhaps the most common pattern is to manage the caching ourselves in the application, building the cache gradually. When requesting data, the cache is checked first for a given value. On a cache miss, the application will then query the DB and update the cache with the result.
Several fine benefits here: We have a high degree of control over the caching strategy (setting different TTLs based on certain parameters, custom caching keys), and our cache only contains data that is actually requested. With that control comes overhead — we will need to manage invalidation ourselves and, additionally, we must accept that the first request for a piece of data may be slow before the cache is populated with it. The tradeoff of an individual request having higher latency, however, is often worth it for higher overall throughput across the system.
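A minimal cache-aside sketch, with a plain dict standing in for the DB and a second dict for the cache (both are stand-ins, not a production store):

```python
import time


class CacheAside:
    """Check the cache first; on a miss, query the 'DB' and populate the cache."""

    def __init__(self, db, ttl_seconds=60):
        self.db = db                 # dict-like stand-in for the real database
        self.ttl = ttl_seconds
        self.cache = {}              # key -> (value, expires_at)

    def get(self, key):
        entry = self.cache.get(key)
        if entry is not None:
            value, expires_at = entry
            if time.time() < expires_at:
                return value         # cache hit
            del self.cache[key]      # expired entry: fall through to the DB
        value = self.db[key]         # cache miss: the slow path
        self.cache[key] = (value, time.time() + self.ttl)
        return value

    def invalidate(self, key):
        """The overhead mentioned above: the app must evict stale keys itself."""
        self.cache.pop(key, None)
```

The first `get` for a key pays the DB round trip; every subsequent `get` within the TTL is served from memory.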
Write Through
Here, if we want to abstract data writes away from the application, we can encapsulate that logic in the services responsible for our DB and cache interactions.
One such way is a write-through strategy. Here, we have separate services for the DB and cache. Our application is writing to the cache, which then updates the DB synchronously. This has a few tradeoffs: Slower writes since we are waiting for both the DB and the cache to finish their writes, and we may be storing data to the cache that is never read. The benefit, however, is that the cache is never stale, and the data is highly consistent. Ideal for financial applications where consistency is a requirement.
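Sketched out, with dicts again standing in for the cache and DB, the defining property of write-through is that both writes happen in the same synchronous call:

```python
class WriteThroughStore:
    """Every write goes to the cache and, synchronously, to the backing store."""

    def __init__(self):
        self.db = {}      # stand-in for the database
        self.cache = {}   # stand-in for the cache

    def write(self, key, value):
        self.cache[key] = value
        self.db[key] = value    # synchronous: the caller waits for both writes

    def read(self, key):
        return self.cache[key]  # in this scheme the cache is never stale
```

The slower write buys the consistency guarantee: there is no moment where a reader can observe the cache and the DB disagreeing.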
Write Behind
Similar to the above strategy, we are writing to the cache, and the database is updated from the cache. However, this is done asynchronously. This change will reduce the latency of our writes, though we are then at risk of potential data loss if the cache is down before a write completes to the database. This strategy is well-suited for logging systems and metrics where occasional data loss is acceptable in exchange for a high volume of fast writes.
Write Around
This is similar to our Cache Aside strategy, except our database service manages the writes to cache. We gain the benefits of simplicity while maintaining flexibility with what we cache. Ideal for rarely-read data such as historical logs.
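As a sketch, write-around sends writes straight to the DB and only populates the cache when a key is actually read (the dict stand-ins are illustrative):

```python
class WriteAroundStore:
    """Writes skip the cache; the cache fills only on read, cache-aside style."""

    def __init__(self):
        self.db = {}
        self.cache = {}

    def write(self, key, value):
        self.db[key] = value
        self.cache.pop(key, None)   # drop any stale cached copy

    def read(self, key):
        if key not in self.cache:
            self.cache[key] = self.db[key]   # populate on first read only
        return self.cache[key]
```

Data that is written once and rarely read, like the historical logs mentioned above, never wastes cache space under this scheme.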
Refresh Ahead
If your application is in a situation where you can predict spikes in traffic, a refresh-ahead strategy is advantageous. Here, this read strategy proactively refreshes data before it expires, rather than waiting for a miss to trigger a reload. The cache will monitor the TTL, and when reaching a certain threshold, it automatically fetches a fresh value in the background. If traffic spikes are predictable, a warm-up time can be set. This is also advantageous if it is very expensive to calculate the result, such as with dashboard data.
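A simplified refresh-ahead sketch: when a read finds an entry whose remaining TTL has dropped below a threshold, it recomputes the value eagerly. The refresh here runs inline for clarity; a real implementation would hand it to a background task so the reader never waits. `loader` stands in for the expensive computation (say, a dashboard aggregation).

```python
import time


class RefreshAheadCache:
    """Proactively refresh entries nearing expiry instead of waiting for a miss."""

    def __init__(self, loader, ttl=60.0, refresh_ratio=0.2):
        self.loader = loader               # expensive computation, e.g. a DB query
        self.ttl = ttl
        self.threshold = ttl * refresh_ratio
        self.cache = {}                    # key -> (value, expires_at)

    def get(self, key):
        now = time.time()
        entry = self.cache.get(key)
        if entry is None or now >= entry[1]:
            value = self.loader(key)       # cold or fully expired: load now
            self.cache[key] = (value, now + self.ttl)
            return value
        value, expires_at = entry
        if expires_at - now < self.threshold:
            # Nearing expiry: refresh eagerly (inline here; background in practice)
            self.cache[key] = (self.loader(key), now + self.ttl)
        return value
```

So long as traffic keeps arriving before entries fully expire, no request ever pays the full recompute cost, which is the point for predictable spikes.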
When To Use Which Strategy
Most APIs can benefit from a cache-aside strategy, which allows for a high level of control and flexibility. Which alternative is suitable is dictated by the requirements — write-through for high-consistency financial apps, write-behind for metrics monitoring, write-around for archival applications, and refresh-ahead for computation-heavy data such as financial aggregations. In many cases, an app will employ different strategies for different portions of the application, depending on the services provided.
Western Railway
New Album – Phone Lines 📞 🎶 🐦
[phone rings]
Hey!!
I'm calling to let you know I have new music out — Phone Lines! It's a little Shibuya-Kei-inspired party. Enjoy some cutesy melodies and whimsy!
Anyway, my dog is calling me on the other line. Gotta go — talk to you later!
[hangs up]

