<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xml:base="https://joethephish.me/" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Joe&#39;s Blog</title>
    <link>https://joethephish.me/</link>
    <atom:link href="https://joethephish.me/feed.xml" rel="self" type="application/rss+xml" />
    <description>Blog posts by Joseph Humfrey</description>
    <language>en</language>
    <item>
      <title>Hour by Hour is here!</title>
      <link>https://joethephish.me/blog/hour-by-hour-released/</link>
      <description>&lt;p&gt;&lt;img src=&quot;https://joethephish.me/blog/img/hour-by-hour-hero.jpg&quot; alt=&quot;Hour by Hour screenshots and icon&quot; /&gt;&lt;/p&gt;
&lt;p&gt;I finally released my iOS app!&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://selkie.design/hour-by-hour&quot;&gt;Hour by Hour&lt;/a&gt; is a day planner that lets you sketch out your schedule with natural language. Write things like &amp;quot;Flight departs at 3:30pm&amp;quot; or &amp;quot;Set off 2 hours before&amp;quot;, and it figures out the timing for you. Follow along with Live Activities on your Lock Screen, share schedules with friends via iCloud, and more.&lt;/p&gt;
&lt;p&gt;It&#39;s free to download, with an optional one-time purchase to unlock unlimited schedules.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://apps.apple.com/app/hour-by-hour-day-planner/id6738743855&quot;&gt;Download Hour by Hour on the App Store&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;The origin story&lt;/h2&gt;
&lt;p&gt;I originally had the idea while flying to San Francisco for GDC two years ago. I was doing the classic thing of working backwards from a flight time — if I need to be at the airport by 9, and it takes an hour to get there, and I need an hour to get ready... I wanted an app where I could just type all of that out naturally and have the timing calculated for me. I&#39;d been mulling it over ever since, and I&#39;m thrilled it&#39;s finally out in the world.&lt;/p&gt;
&lt;h2&gt;Natural language&lt;/h2&gt;
&lt;p&gt;I&#39;ve always loved the natural language input in &lt;a href=&quot;https://flexibits.com/fantastical&quot;&gt;Fantastical&lt;/a&gt;, and Apple&#39;s own take in the Reminders app refined the idea further, with a clearer highlight and confirmation step. Hour by Hour builds on this — as you type, it parses your timing in real time and highlights what it finds. Times can be absolute (&amp;quot;at 3pm&amp;quot;) or relative (&amp;quot;20 minutes before&amp;quot;), and linked events update automatically when plans change.&lt;/p&gt;
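&lt;p&gt;As a rough illustration of the kind of parsing involved - a minimal sketch with made-up types, not Hour by Hour&#39;s actual parser - the two cases boil down to spotting absolute and relative timings in the typed text:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-swift&quot;&gt;import Foundation

// Minimal sketch: recognise &amp;quot;at 3:30pm&amp;quot; (absolute) vs &amp;quot;2 hours before&amp;quot; (relative).
// Hypothetical types, for illustration only.
enum Timing {
    case absolute(hour: Int, minute: Int)
    case relativeBefore(minutes: Int)
}

func parseTiming(_ text: String) -&amp;gt; Timing? {
    let lower = text.lowercased()
    if let m = lower.firstMatch(of: /at (\d{1,2})(?::(\d{2}))?\s*(am|pm)/) {
        var hour = Int(m.1) ?? 0
        if m.3 == &amp;quot;pm&amp;quot;, hour &amp;lt; 12 { hour += 12 }
        return .absolute(hour: hour, minute: m.2.flatMap { Int($0) } ?? 0)
    }
    if let m = lower.firstMatch(of: /(\d+)\s*(hour|minute)s?\s+before/) {
        let amount = Int(m.1) ?? 0
        return .relativeBefore(minutes: m.2 == &amp;quot;hour&amp;quot; ? amount * 60 : amount)
    }
    return nil
}
&lt;/code&gt;&lt;/pre&gt;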
&lt;!-- TODO: Video of natural language input --&gt;
&lt;h2&gt;Photo import&lt;/h2&gt;
&lt;p&gt;A feature I&#39;m particularly fond of was inspired by &lt;a href=&quot;https://bsky.app/profile/justmedevin&quot;&gt;Devin Davies&lt;/a&gt;&#39;s wonderful &lt;a href=&quot;https://crouton.app/&quot;&gt;Crouton&lt;/a&gt;: snap a photo of a physical schedule — a conference programme, a printed itinerary, a festival timetable — and Hour by Hour imports it and lets you follow along.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://joethephish.me/blog/img/hour-by-hour-photo-import.jpg&quot; alt=&quot;Photo import&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;The icon&lt;/h2&gt;
&lt;p&gt;The gorgeous icon was designed by &lt;a href=&quot;https://matthewskiles.com/&quot;&gt;Matthew Skiles&lt;/a&gt;. I absolutely adore it, and Matthew was super professional and easy to work with. Check out his incredible portfolio!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://joethephish.me/blog/img/hour-by-hour-app-icon.png&quot; alt=&quot;Hour by Hour app icon&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Huge thanks to everyone who tested the app and gave me great feedback ❤️ especially &lt;a href=&quot;https://bsky.app/profile/chriswu.com&quot;&gt;Chris Wu&lt;/a&gt;, &lt;a href=&quot;https://bsky.app/profile/jon.inkle.co&quot;&gt;Jon Ingold&lt;/a&gt; and &lt;a href=&quot;https://bsky.app/profile/tal.by&quot;&gt;Tal&lt;/a&gt;.&lt;/p&gt;
</description>
      <pubDate>Tue, 31 Mar 2026 00:00:00 GMT</pubDate>
      <dc:creator>Joseph Humfrey</dc:creator>
      <guid>https://joethephish.me/blog/hour-by-hour-released/</guid>
      <dateString>31st March 2026</dateString>
    </item>
    <item>
      <title>What if code wasn&#39;t a text document?</title>
      <link>https://joethephish.me/blog/visual-programming/</link>
      <description>&lt;video class=&quot;video&quot; autoplay=&quot;&quot; controls=&quot;&quot; playsinline=&quot;&quot;&gt;
  &lt;source src=&quot;https://assets.selkie.design/visual-programming/visual-programming-concept.mp4&quot; type=&quot;video/mp4&quot; /&gt;
  Your browser does not support the video tag.
&lt;/video&gt;
&lt;p&gt;You&#39;re looking at code, but it doesn&#39;t look like code.&lt;/p&gt;
&lt;p&gt;Functions are named in plain English - forget &lt;code&gt;snake_case&lt;/code&gt;, &lt;code&gt;camelCase&lt;/code&gt;, &lt;code&gt;PascalCase&lt;/code&gt;. A function is just called what it does: &amp;quot;Add two vectors.&amp;quot; You can expand it to read an English explanation of what it does, and the functions it calls appear as linked pills that you can expand to explore further. There&#39;s real code underneath - you can drill down to it when you need to - but the idea is that most of the time, you shouldn&#39;t have to.&lt;/p&gt;
&lt;p&gt;This is a prototype I&#39;ve been playing with. I&#39;ve reached the point where taking it further would mean a serious time investment, and I have other priorities, so I&#39;m putting it on ice for now. I&#39;m not releasing it, but that&#39;s exactly why I&#39;m writing about it today - I&#39;ve been thinking about this stuff for about twenty years, and if I&#39;m not going to build it into something real, I&#39;d at least like to get my thoughts down.&lt;/p&gt;
&lt;h2&gt;Will we be hand-writing code at all in 5 years?&lt;/h2&gt;
&lt;p&gt;How we write code is changing. How far and how fast depends on who you ask - it varies wildly by programmer, by project, by the day - but the direction is clear. AI is increasingly writing the nuts and bolts of line-by-line code. It ranges from &amp;quot;vibe coding,&amp;quot; where the programmer theoretically isn&#39;t looking at the code at all, to getting the AI to write small snippets that accelerate what you&#39;d write yourself, and everything in between.&lt;/p&gt;
&lt;p&gt;What seems clear to me is that pure vibe coding can often work for small toy projects, but at any real scale the programmer loses oversight, and things turn into a jumbled, confused mess. It becomes impossible to refactor, because what prompt could you possibly give the AI if you don&#39;t understand what the problem even is? So it&#39;s critical to keep an eye on what the LLM is building and how it&#39;s architected, so that it fits with your vision of where your project is going.&lt;/p&gt;
&lt;p&gt;Either way, the programmer is moving into a &lt;em&gt;code architect&lt;/em&gt; position - thinking about structure and architecture more than the nitty gritty details - and for me this has always been both the hardest and most interesting part of the job. If you need to read and understand an LLM&#39;s code, it helps to have tools that support understanding architecture from the top down, rather than tools built around editing text files line by line, where functionality is scattered somewhat arbitrarily across files.&lt;/p&gt;
&lt;h2&gt;Twenty years of noodling&lt;/h2&gt;
&lt;p&gt;I&#39;ve been interested in this stuff for a long time. As far back as around 2005, I distinctly remember opening up Photoshop on my lunch break at &lt;a href=&quot;https://www.rare.co.uk/&quot;&gt;Rare&lt;/a&gt; and drawing something that looked remarkably like &lt;a href=&quot;https://en.wikipedia.org/wiki/Scratch_(programming_language)&quot;&gt;Scratch&lt;/a&gt;. It turns out Scratch was invented before that, so I can&#39;t claim any originality, but I was thinking along similar lines!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://joethephish.me/blog/img/scratch.png&quot; alt=&quot;Programming in Scratch using blocks&quot; /&gt;&lt;/p&gt;
&lt;p class=&quot;caption&quot;&gt;Programming in Scratch using blocks&lt;/p&gt;
&lt;p&gt;Scratch is interesting because it provides a contained, visual object-based programming model. But it&#39;s not visually dense and it&#39;s very slow to author. That makes it fantastic as a learning tool for children, and bad if you want to be really productive.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://joethephish.me/blog/img/shortcuts.jpg&quot; alt=&quot;Shortcuts screenshot&quot; /&gt;&lt;/p&gt;
&lt;p class=&quot;caption&quot;&gt;Editing in Shortcuts on iPhone&lt;/p&gt;
&lt;p&gt;Apple&#39;s Shortcuts sits on a similar axis - ostensibly accessible for non-technical users, but with a steep learning curve despite that. Simple concepts like conditionals and loops are extremely cumbersome, &amp;quot;functions&amp;quot; don&#39;t really exist (you create shortcuts that call shortcuts), and they&#39;re really slow to activate. People have done powerful stuff with them, and I&#39;d assume it&#39;s made logic programming more accessible to non-programmers. But it&#39;s extremely easy to hit the limitations. It feels telling that I can&#39;t imagine something like Shortcuts being created nowadays - I&#39;d expect it would be replaced by some kind of consumer-facing vibe coding app with a friendly block-based visualisation of the output.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.inklestudios.com/inklewriter/img/screenshot.jpg&quot; alt=&quot;inklewriter screenshot&quot; /&gt;&lt;/p&gt;
&lt;p&gt;At inkle, we&#39;d already shipped one visual code editor: &lt;a href=&quot;https://www.inklestudios.com/inklewriter/&quot;&gt;inklewriter&lt;/a&gt;. And I&#39;d been thinking about more visual approaches to &lt;a href=&quot;https://www.inklestudios.com/ink/&quot;&gt;ink&lt;/a&gt;, our narrative scripting language. Our core users are often writers first, not programmers per se - they want the technical features only in so far as they help them achieve their interactivity goals. I liked the idea of keeping ink&#39;s power while providing a more visual representation.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://joethephish.me/blog/img/visual-ink.jpg&quot; alt=&quot;visual ink editor&quot; /&gt;&lt;/p&gt;
&lt;p class=&quot;caption&quot;&gt;A mockup I made 5-10 years ago of what a more visual ink editor could look like.&lt;br /&gt;
I keep coming back to this dynamic 2D code layout idea!&lt;/p&gt;
&lt;p&gt;But I always liked the denseness of something more like what (the now defunct?) &lt;a href=&quot;https://vimeo.com/485177664&quot;&gt;Dion Systems&lt;/a&gt; were doing. Allen Webster and Ryan Fleury were asking the question: what if code&#39;s underlying representation wasn&#39;t text? Their code still &lt;em&gt;looks like text&lt;/em&gt; - it&#39;s laid out as if it&#39;s indented text - but internally it&#39;s... not. In technical terms the internal model is an Abstract Syntax Tree - a hierarchy of objects. The whitespace doesn&#39;t exist; the code is simply positioned according to its structure. The idea of tabs vs spaces doesn&#39;t exist; you can use a slider to set indentation however you like. Curly braces don&#39;t exist; they&#39;re just a visualisation - they could&#39;ve equally chosen to draw rectangles around blocks of code. Renaming a variable literally happens in one place, even though it may be seen in multiple places. This unlocks a huge amount of power.&lt;/p&gt;
&lt;p&gt;So: does code really have to be pure text with fun colours and refactoring tools as a layer on top?&lt;/p&gt;
&lt;h2&gt;My quick prototype&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;https://joethephish.me/blog/img/visual-programming-screenshot.jpg&quot; alt=&quot;Visual programming screenshot&quot; /&gt;&lt;/p&gt;
&lt;p&gt;I&#39;ve had this confluence of ideas brewing for a long time, so I decided to take a little stab at a prototype, with these aims:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Natural language first.&lt;/strong&gt; The first thing you see is English, not code. Functions are named in English - &amp;quot;Add two vectors,&amp;quot; not &lt;code&gt;addTwoVectors&lt;/code&gt;. If you like, think of it as &lt;code&gt;snake_case&lt;/code&gt; with the underscores removed, nicer capitalisation, and perhaps the odd (in)definite article. This has strong ties with &lt;a href=&quot;https://en.wikipedia.org/wiki/Literate_programming&quot;&gt;Literate Programming&lt;/a&gt;: the idea that the English explanation is primary, structured to be read and understood by a human first, and compiled by a computer second.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;An infinite canvas, not files.&lt;/strong&gt; Instead of code split across text files, it&#39;s a structured bag of objects with references between them, explored freely on a 2D canvas - like Sketch, Figma or Apple&#39;s Freeform. The code itself is presented more like the UI from a pro tool - Final Cut, Unity, Godot, Photoshop. It uses a lot of text, but it&#39;s not a text document at heart.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Spatial presentation.&lt;/strong&gt; Code is often thought about in terms of a top-down hierarchy: who calls who and who owns what. I don&#39;t know about other programmers, but I think about this spatially. In my usual IDE, I arrange the files I have open so that the tabs for important high-level controllers are on the left and smaller lower-level code progressively to the right. Similarly, I&#39;ve noticed different programmers have different approaches within individual files, with functions and call order often roughly going top to bottom or vice versa.&lt;/p&gt;
&lt;p&gt;My prototype makes this spatial thinking explicit. All functions are listed by code area on the left. On the main canvas, you start with the entry point. From there you spelunk through the code, exploring it top-down. As you read and understand what a function does, the functions it calls are brought in nearby. I&#39;ve been experimenting with options to put them to the sides or underneath. Functions that call it are above. Perhaps you should be able to pin and drag these pieces around to match how you&#39;re thinking about them. Perhaps the editor should remember your spatial arrangement, or perhaps it should be fully dynamic.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Progressive disclosure.&lt;/strong&gt; You don&#39;t see traditional &amp;quot;actual code&amp;quot; by default. Functions show their English descriptions. Expand one and you see step-by-step explanations that can be further expanded. Functions that are called appear as linked pills. Hover a pill to preview it on the canvas; click to open it permanently. You can open and close functions a bit like tabs.&lt;/p&gt;
&lt;p&gt;At the deepest level, you can see lower-level code - ideally presented in an elegantly terse block-based structure. If you want to edit by hand, it should be easy with keyboard input and good shortcuts, unlike Scratch or Shortcuts. But I&#39;d expect the most common way to make small edits is to select some code and use a short AI prompt to tweak, rearrange or refactor it.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://joethephish.me/blog/img/visual-programming-blocks-mockup.jpg&quot; alt=&quot;Visual programming block code&quot; /&gt;&lt;/p&gt;
&lt;p class=&quot;caption&quot;&gt;Mockup of how visual programming blocks look at code level (no base layer JavaScript)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Under the hood.&lt;/strong&gt; The prototype wraps JavaScript, which is the actual code underneath. It&#39;s hackable, easily authored by an LLM, and great for prototyping in a browser. But the specific underlying language isn&#39;t fundamental to the idea. Internally, the English wording is written as comments within the JavaScript, then presented as the primary thing, with the code hidden unless you expand far enough. The JavaScript necessarily still has true function and variable names, used as unique identifiers internally. I&#39;d like to aim for these &amp;quot;true symbol names&amp;quot; to never be necessary for a human reader, though that might not be possible for ambiguity reasons?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;LLM generation.&lt;/strong&gt; For generating new programs, you prompt the LLM (currently GPT-5.2 Codex). It writes JavaScript, and my system prompt tells it how to structure the code with a specific comment format. I ask it to link references to other functions using markdown-style links, such as &lt;code&gt;// Now, [Parse](parse_text) the [raw text](raw_text)&lt;/code&gt;.&lt;/p&gt;
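&lt;p&gt;For a flavour of how little machinery that comment format needs, here&#39;s a tiny sketch of pulling those markdown-style links back out of a comment line - written in Swift here rather than the prototype&#39;s JavaScript, and not the prototype&#39;s real code:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-swift&quot;&gt;// Sketch only: extract [label](identifier) pairs from a comment line.
func extractLinks(fromComment comment: String) -&amp;gt; [(label: String, identifier: String)] {
    comment.matches(of: /\[([^\]]+)\]\(([^)]+)\)/).map { match in
        (label: String(match.1), identifier: String(match.2))
    }
}

// extractLinks(fromComment: &amp;quot;// Now, [Parse](parse_text) the [raw text](raw_text)&amp;quot;)
// yields [(&amp;quot;Parse&amp;quot;, &amp;quot;parse_text&amp;quot;), (&amp;quot;raw text&amp;quot;, &amp;quot;raw_text&amp;quot;)]
&lt;/code&gt;&lt;/pre&gt;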
&lt;p&gt;Currently, the only visual representation beyond the English descriptions is a table of values for blocks of &lt;code&gt;let&lt;/code&gt; definitions - which works nicely for things like defining 3D objects in a raytraced scene, a bit like an inspector in Unity or Blender. But I&#39;d imagine different view styles for different code structures and flow control types.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://joethephish.me/blog/img/visual-programming-definition-table.jpg&quot; alt=&quot;Visual programming definition table&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;Where it stands&lt;/h2&gt;
&lt;p&gt;Right now, you can only browse - you can&#39;t edit existing code. The function linking via markdown-style links adds ambiguity: it&#39;s not always 100% clear whether a link refers to a specific function, so you have to hover to check. I think this is probably solvable, but it&#39;s not great yet. There&#39;s a lot of work in figuring out how language features map to visual building blocks, and I&#39;d only call it a success if the visual style is actually as clear to a programmer as traditional code - like for like, assuming both notations were new to them.&lt;/p&gt;
&lt;p&gt;I&#39;m also not sure how best to make use of an infinite canvas layout. Currently, code blocks just fill down vertically. I&#39;d love it if the user could drag and rearrange things to match their intuition, and save that layout. Even better: when generating code, the LLM or the system could figure out how to lay things out in 2D in a way that actively helps you understand the structure.&lt;/p&gt;
&lt;p&gt;I&#39;m genuinely curious: do other programmers think about their code spatially, or is that just me and my graphic design background leaking through? Either way, I do wonder whether we&#39;ll really be using plain text files forever. I&#39;d love to hear what you think (see my socials below).&lt;/p&gt;
</description>
      <pubDate>Fri, 27 Feb 2026 00:00:00 GMT</pubDate>
      <dc:creator>Joseph Humfrey</dc:creator>
      <guid>https://joethephish.me/blog/visual-programming/</guid>
      <dateString>27th February 2026</dateString>
    </item>
    <item>
      <title>Instant Actions added to Substage</title>
      <link>https://joethephish.me/blog/substage-instant-actions/</link>
      <description>&lt;p&gt;This is a feature I&#39;ve been wanting to add for a while, and I&#39;m really pleased with how it turned out: &lt;a href=&quot;https://substage.app/&quot;&gt;Substage&lt;/a&gt; now has &lt;strong&gt;Instant Actions&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;For hundreds of the most common operations, Substage now skips the AI entirely and just &lt;em&gt;does the thing&lt;/em&gt;. Yes, bypassing the AI is a feature! It makes the app feel incredibly snappy for the stuff you do all the time.&lt;/p&gt;
&lt;p&gt;Type &lt;code&gt;jpg&lt;/code&gt; and your file converts. Type &lt;code&gt;zip&lt;/code&gt; and it&#39;s zipped. No waiting for a model to think about it. Just instant.&lt;/p&gt;
&lt;p&gt;Here&#39;s a real-time demo: no editing, no speedup:&lt;/p&gt;
&lt;video class=&quot;video&quot; autoplay=&quot;&quot; loop=&quot;&quot; muted=&quot;&quot; playsinline=&quot;&quot;&gt;
    &lt;source src=&quot;https://assets.selkie.design/substage/videos/instant-actions.mp4&quot; type=&quot;video/mp4&quot; /&gt;
&lt;/video&gt;
&lt;p&gt;Here&#39;s a taste of what works:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;File conversion:&lt;/strong&gt; &lt;code&gt;jpg&lt;/code&gt;, &lt;code&gt;png&lt;/code&gt;, &lt;code&gt;mp4&lt;/code&gt;, &lt;code&gt;plain text&lt;/code&gt;, &lt;code&gt;word doc&lt;/code&gt;, &lt;code&gt;make an animated gif&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;File info:&lt;/strong&gt; &lt;code&gt;word count&lt;/code&gt;, &lt;code&gt;lines of code&lt;/code&gt;, &lt;code&gt;file size&lt;/code&gt;, &lt;code&gt;resolution&lt;/code&gt;, &lt;code&gt;codec&lt;/code&gt;, &lt;code&gt;aspect ratio&lt;/code&gt;, &lt;code&gt;fps&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;File management:&lt;/strong&gt; &lt;code&gt;zip&lt;/code&gt;, &lt;code&gt;unzip&lt;/code&gt;, &lt;code&gt;trash&lt;/code&gt;, &lt;code&gt;duplicate&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Selection:&lt;/strong&gt; &lt;code&gt;select all pdfs&lt;/code&gt;, &lt;code&gt;select first jpg&lt;/code&gt;, &lt;code&gt;select last word doc&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Quick actions:&lt;/strong&gt; &lt;code&gt;open in terminal&lt;/code&gt; (or just &lt;code&gt;term&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;
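&lt;p&gt;Conceptually it&#39;s very simple. This isn&#39;t Substage&#39;s actual implementation, but the general shape is a lookup table that gets consulted before the AI ever sees the request - roughly along these lines:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-swift&quot;&gt;import Foundation

// Sketch of the general idea, not Substage&#39;s real code: match the typed text
// against a table of known actions before falling back to the AI.
enum InstantAction {
    case convert(to: String)
    case zip, unzip, trash, duplicate
}

let instantActions: [String: InstantAction] = [
    &amp;quot;jpg&amp;quot;: .convert(to: &amp;quot;jpg&amp;quot;),
    &amp;quot;png&amp;quot;: .convert(to: &amp;quot;png&amp;quot;),
    &amp;quot;zip&amp;quot;: .zip,
    &amp;quot;unzip&amp;quot;: .unzip,
    &amp;quot;trash&amp;quot;: .trash,
    &amp;quot;duplicate&amp;quot;: .duplicate,
]

func instantAction(for input: String) -&amp;gt; InstantAction? {
    // Anything that doesn&#39;t match exactly falls through to the LLM as before.
    instantActions[input.trimmingCharacters(in: .whitespaces).lowercased()]
}
&lt;/code&gt;&lt;/pre&gt;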
&lt;p&gt;The thing is, before this update, typing &lt;code&gt;zip&lt;/code&gt; into Substage could feel a little silly. Was it really faster than right-clicking and selecting Compress? Well now I&#39;d argue it genuinely can be—and there&#39;s something satisfying about it too.&lt;/p&gt;
&lt;p&gt;I&#39;ve also shaved almost a full second off prompt processing times more generally, so even when you &lt;em&gt;do&lt;/em&gt; need AI, everything feels snappier—especially with lightweight models like GPT 4.1 Mini or Claude 4.5 Haiku.&lt;/p&gt;
&lt;p&gt;If there are commands you think should get the Instant Action treatment, let me know! You can reach me on &lt;a href=&quot;https://discord.gg/jgkwAv4H7M&quot;&gt;Discord&lt;/a&gt;, &lt;a href=&quot;https://substage.featurebase.app/&quot;&gt;Featurebase&lt;/a&gt;, or via &lt;a href=&quot;mailto:info@selkie.design&quot;&gt;email&lt;/a&gt;.&lt;/p&gt;
</description>
      <pubDate>Fri, 16 Jan 2026 00:00:00 GMT</pubDate>
      <dc:creator>Joseph Humfrey</dc:creator>
      <guid>https://joethephish.me/blog/substage-instant-actions/</guid>
      <dateString>16th January 2026</dateString>
    </item>
    <item>
      <title>Live Activities Are Usually Half Asleep</title>
      <link>https://joethephish.me/blog/live-activities/</link>
      <description>&lt;video class=&quot;video&quot; controls=&quot;&quot; autoplay=&quot;&quot; loop=&quot;&quot; muted=&quot;&quot; playsinline=&quot;&quot;&gt;
  &lt;source src=&quot;https://assets.selkie.design/hour-by-hour/live-cat-activities.mp4&quot; type=&quot;video/mp4&quot; /&gt;
  Your browser does not support the video tag.
&lt;/video&gt;
&lt;p class=&quot;caption&quot;&gt;Fanta keeps me busy.&lt;/p&gt;
&lt;p&gt;Live Activities are a little less alive than I expected.&lt;/p&gt;
&lt;p&gt;I went back and forth on whether they were a good fit for my app, &lt;a href=&quot;https://selkie.design/hour-by-hour/&quot;&gt;Hour by Hour&lt;/a&gt;. The feature is powerful, but it has strong opinions and restrictions about how it wants to be used.&lt;/p&gt;
&lt;p&gt;Hour by Hour is a day planner that lets you sketch out a plan for your day. It&#39;s designed for busy days, such as travel days where you need to figure out timing, or for tracking a detailed schedule at a conference.&lt;/p&gt;
&lt;p&gt;Of course, Live Activities are a good fit for this. It just took longer than I expected to get there, mostly because Apple’s APIs are surprisingly limited in a few key ways.&lt;/p&gt;
&lt;h2&gt;Where things get awkward&lt;/h2&gt;
&lt;p&gt;Live Activities are clearly designed to be driven by the outside world. Taxis, flight tracking, food delivery. And this means they were designed with &lt;strong&gt;push delivery&lt;/strong&gt; in mind - from a server.&lt;/p&gt;
&lt;p&gt;Hour by Hour is not that kind of app. It is almost entirely local and time-based. The schedule usually does not change unless the user edits it. Running a server just to keep a local event “live” would be overkill and silly. (Also I hate server work, and am very bad at it.)&lt;/p&gt;
&lt;p&gt;This becomes even more stark when you compare Live Activities to notifications. The notifications API was designed around local scheduling first, before push notifications even existed. Time-based triggers are a core feature. Live Activities were not built that way, and it shows.&lt;/p&gt;
&lt;p&gt;Until iOS 26, a Live Activity could only be started &lt;em&gt;now&lt;/em&gt; or with a push from a server. No scheduling at all. iOS 26 &lt;a href=&quot;https://developer.apple.com/documentation/activitykit/activity/request(attributes:content:pushtype:style:alertconfiguration:start:)&quot;&gt;finally adds the ability to schedule a start&lt;/a&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-swift&quot;&gt;static func request(
    attributes: Attributes,
    content: ActivityContent&amp;lt;Activity&amp;lt;Attributes&amp;gt;.ContentState&amp;gt;,
    pushType: PushType? = nil,
    style: ActivityStyle,
    alertConfiguration: AlertConfiguration,
    start: Date
) throws -&amp;gt; Activity&amp;lt;Attributes&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That helps. You can now say “start this at 2pm”.&lt;/p&gt;
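&lt;p&gt;Here&#39;s roughly what that looks like in use - a minimal sketch with a hypothetical attributes type, not Hour by Hour&#39;s real code, and iOS 26 only:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-swift&quot;&gt;import ActivityKit

// Hypothetical attributes type, purely for illustration.
struct ScheduleAttributes: ActivityAttributes {
    struct ContentState: Codable, Hashable {
        var currentEvent: String
    }
    var title: String
}

// Schedule the Live Activity to start at 2pm today, using the API above.
func scheduleAfternoonPlan() throws {
    let twoPM = Calendar.current.date(bySettingHour: 14, minute: 0, second: 0, of: .now)!
    _ = try Activity&amp;lt;ScheduleAttributes&amp;gt;.request(
        attributes: ScheduleAttributes(title: &amp;quot;Afternoon plan&amp;quot;),
        content: .init(state: .init(currentEvent: &amp;quot;Lunch&amp;quot;), staleDate: nil),
        pushType: nil,
        style: .standard,
        alertConfiguration: AlertConfiguration(title: &amp;quot;Afternoon plan&amp;quot;,
                                               body: &amp;quot;Starting now&amp;quot;,
                                               sound: .default),
        start: twoPM
    )
}
&lt;/code&gt;&lt;/pre&gt;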
&lt;p&gt;What you &lt;em&gt;still&lt;/em&gt; cannot say is “and make this update at 2:30” or “and finish at 3pm”. Live Activities cannot advance based on a timetable. They can only update while the app is open, or when the user triggers an App Intent by interacting with it directly.&lt;/p&gt;
&lt;p&gt;You can remove a Live Activity programmatically once you decide it is finished, but only if the app is in the foreground. You just cannot schedule that finishing point in advance.&lt;/p&gt;
&lt;p&gt;Interactivity exists via App Intents, but it is slow. Pressing a button takes around two seconds to react, even on an iPhone 17 Pro. There is also no control over animation timing, which makes every interaction feel heavier and even slower than it ought to.&lt;/p&gt;
&lt;p&gt;There is a slight philosophical mismatch for my specific app, too. Hour by Hour normally progresses automatically based on time. You do not have to tick things off for the day to move forward. Live Activities want explicit interaction.&lt;/p&gt;
&lt;h2&gt;The compromise, and why it still works&lt;/h2&gt;
&lt;p&gt;In the end, I leaned fully into App Intents, and it&#39;s basically fine.&lt;/p&gt;
&lt;p&gt;In the Live Activity, you tick off events as you go. Once everything is checked off, the activity marks itself as finished and removes itself. It is not quite the model I would choose in a vacuum, but it is predictable, fully offline, and easy to reason about.&lt;/p&gt;
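&lt;p&gt;For the curious, the ticking-off side is plain App Intents. This is just a sketch of the shape of it - the names are hypothetical, not my actual implementation:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-swift&quot;&gt;import AppIntents

// Sketch of the kind of intent a &amp;quot;tick this off&amp;quot; button in the Live Activity could trigger.
struct MarkEventDoneIntent: LiveActivityIntent {
    static var title: LocalizedStringResource = &amp;quot;Mark Event Done&amp;quot;

    @Parameter(title: &amp;quot;Event ID&amp;quot;)
    var eventID: String

    func perform() async throws -&amp;gt; some IntentResult {
        // Mark the event as done, then update or end the Live Activity here.
        return .result()
    }
}
&lt;/code&gt;&lt;/pre&gt;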
&lt;p&gt;It also has an upside. The Live Activity becomes something you actively engage with, rather than a passive progress bar you ignore. Clearing the final item and watching it disappear feels quietly satisfying.&lt;/p&gt;
&lt;p&gt;Yes, it is slower than I would like. Yes, I wish it was more responsive. But within Apple’s constraints, it is a compromise I am happy with.&lt;/p&gt;
&lt;p&gt;And overall, I think it adds a lot.&lt;/p&gt;
&lt;p&gt;Being able to glance at your Lock Screen and see exactly where you are in your day, using a structure you sketched yourself, feels genuinely useful, personal and immediate.&lt;/p&gt;
&lt;p&gt;Live Activities might be half asleep, especially when they are running entirely locally. But even half awake, they turn out to be a good companion for a real, messy, human (or cat) day.&lt;/p&gt;
</description>
      <pubDate>Sat, 27 Dec 2025 00:00:00 GMT</pubDate>
      <dc:creator>Joseph Humfrey</dc:creator>
      <guid>https://joethephish.me/blog/live-activities/</guid>
      <dateString>27th December 2025</dateString>
    </item>
    <item>
      <title>Godot’s Scene System Is Just Brilliant</title>
      <link>https://joethephish.me/blog/godot-scene-system/</link>
      <description>&lt;p&gt;At &lt;a href=&quot;https://www.inklestudios.com/&quot;&gt;inkle&lt;/a&gt;, we&#39;re currently making our &lt;a href=&quot;https://www.inklestudios.com/tr-49&quot;&gt;first game&lt;/a&gt; in Godot after using Unity for just under 10 years. So far, I couldn&#39;t be happier!&lt;/p&gt;
&lt;p&gt;I mentioned &lt;a href=&quot;https://bsky.app/profile/joe.inkle.co/post/3m63hiq7hes2b&quot;&gt;on Bluesky&lt;/a&gt; that I was thinking of writing a post about a few of my favourite Godot features. For now, I only want to talk about one of them, because it&#39;s hands down my absolute favourite. It&#39;s so fundamental to Godot, and I just can&#39;t get over how good it is.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://joethephish.me/blog/img/godot-scene-screenshot.jpg&quot; alt=&quot;Godot Scene screenshot&quot; /&gt;&lt;/p&gt;
&lt;p class=&quot;caption&quot;&gt;Our upcoming game &lt;a href=&quot;https://www.inklestudios.com/tr-49&quot;&gt;TR-49&lt;/a&gt; in the Godot editor, with the notebook scene open.&lt;/p&gt;
&lt;p&gt;Godot&#39;s scene system just makes far more sense than the whole scene and prefab tangle that Unity ended up with, especially once nested prefabs arrived and the whole thing grew a second head. Godot feels like someone stepped back, looked at the mess, went for a long walk, and came back with something simpler and much more powerful.&lt;/p&gt;
&lt;h2&gt;How scenes actually work&lt;/h2&gt;
&lt;p&gt;In Godot, a scene is just a hierarchy of nodes that can contain anything from an entire level to a tiny UI element. You can instance scenes inside scenes and nest them as deeply as you like. It is modularity baked right into the structure of the engine.&lt;/p&gt;
&lt;p&gt;Before I had even touched Godot, this was one of the features I was most looking forward to. It didn’t disappoint, and it even had a few surprises up its sleeve.&lt;/p&gt;
&lt;h2&gt;The UX twist I didn’t expect: tabs&lt;/h2&gt;
&lt;p&gt;What I hadn’t expected was the UX paradigm: scenes are treated like independent documents that are opened in their own tabs, exactly like having multiple files open in your code editor, or multiple PSDs open in Photoshop. For the first few minutes this threw me slightly, but when it clicked it seemed totally obvious.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://joethephish.me/blog/img/godot-scene-tabs.jpg&quot; alt=&quot;Godot tabs&quot; /&gt;&lt;/p&gt;
&lt;p&gt;You can flip between several scenes and their hierarchies at once, editing different parts of your project simultaneously. If you&#39;ve used tabs in any creative tool ever, you won&#39;t need me to explain that to you, though it does throw Unity&#39;s approach into stark relief: loading only one monolithic scene at a time (or multiple additively), with a weird and fragile prefab mode, now seems absolutely wild.&lt;/p&gt;
&lt;h2&gt;Nesting scenes is simple by default&lt;/h2&gt;
&lt;p&gt;When you nest one scene in another, you don&#39;t see its children by default. Instead of the implementation details of that scene exploding all over your parent hierarchy, you just see a single, clean node where it&#39;s been instanced. It keeps the parent scene readable while still letting you package up a whole bunch of complexity inside: a chunk of UI, a reusable interaction, or even a little collection of pick‑ups. Most of the time, I just want to split up a hierarchy so it&#39;s easier to think about, not start customising every single instance.&lt;/p&gt;
&lt;p&gt;This ends up being a better approach to prefabs than Unity&#39;s prefabs themselves, because the instance behaves like a tidy, self‑contained unit rather than a little explosion of overrides waiting to happen.&lt;/p&gt;
&lt;h2&gt;Testing scenes in isolation&lt;/h2&gt;
&lt;p&gt;Another surprise for me: there is a separate play button for playing the current scene, whichever scene tab is active. And that scene can be anything. A whole level, a tiny component, a random bit of UI floating in space.&lt;/p&gt;
&lt;p&gt;This is fantastic for testing features in isolation. In our latest game, we have a notebook overlay, and it was trivial to load and test that as its own scene.&lt;/p&gt;
&lt;p&gt;It might take you a bit of time to figure out how large or small your scenes should be. I definitely went a bit too fine-grained at first. But once you get your head around it, you end up flying. It&#39;s a system that rewards a bit of thought up front and pays you back every single day afterwards.&lt;/p&gt;
&lt;p&gt;Godot has lots of features I really like, but the scene system is the one that makes the whole engine feel fresh and modern. It&#39;s simple and powerful. And once you get used to it, returning to the alternative feels a bit like using a web browser that&#39;s banned tabs and the back button.&lt;/p&gt;
</description>
      <pubDate>Fri, 28 Nov 2025 00:00:00 GMT</pubDate>
      <dc:creator>Joseph Humfrey</dc:creator>
      <guid>https://joethephish.me/blog/godot-scene-system/</guid>
      <dateString>28th November 2025</dateString>
    </item>
    <item>
      <title>Getting Apple’s tiny on-device Foundational Model to pick SF Symbols in Hour by Hour</title>
      <link>https://joethephish.me/blog/apple-foundation-model-icon-picking/</link>
      <description>&lt;p&gt;I’ve been experimenting with Apple’s on-device LLM, and most of what I’ve seen from it has been… absolute nonsense. But, with a cunning trick I got it to achieve greatness.&lt;/p&gt;
&lt;p&gt;It is astonishingly dim a lot of the time and will cheerfully hand you rubbish with complete confidence. But that&#39;s honestly to be expected - it&#39;s a 3-billion parameter model (that err, acts like a 1b model). But I still wanted to see if I could bend it to my will and get something properly useful out of it for my upcoming day planning app &lt;a href=&quot;https://selkie.design/hour-by-hour&quot;&gt;Hour by Hour&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;My first attempt was the most obvious thing: give it the event title and ask for an SF Symbol. That went about as well as you&#39;d expect - even large cloud models like GPT-5 hallucinate SF Symbols that do not exist, or pick something wildly off target.&lt;/p&gt;
&lt;p&gt;But I did have a modicum of hope that if I gave it a list of only 12 SF Symbols to choose from, it might do the right thing. Right? Right???&lt;/p&gt;
&lt;blockquote class=&quot;mastodon-embed&quot; data-embed-url=&quot;https://mastodon.gamedev.place/@joethephish/115458321705909040/embed&quot; style=&quot;background: #FCF8FF; border-radius: 8px; border: 1px solid #C9C4DA; margin: 0; max-width: 540px; min-width: 270px; overflow: hidden; padding: 0;&quot;&gt; &lt;a href=&quot;https://mastodon.gamedev.place/@joethephish/115458321705909040&quot; target=&quot;_blank&quot; style=&quot;align-items: center; color: #1C1A25; display: flex; flex-direction: column; font-family: system-ui, -apple-system, BlinkMacSystemFont, &#39;Segoe UI&#39;, Oxygen, Ubuntu, Cantarell, &#39;Fira Sans&#39;, &#39;Droid Sans&#39;, &#39;Helvetica Neue&#39;, Roboto, sans-serif; font-size: 14px; justify-content: center; letter-spacing: 0.25px; line-height: 20px; padding: 24px; text-decoration: none;&quot;&gt; &lt;svg xmlns=&quot;http://www.w3.org/2000/svg&quot; xmlns:xlink=&quot;http://www.w3.org/1999/xlink&quot; width=&quot;32&quot; height=&quot;32&quot; viewBox=&quot;0 0 79 75&quot;&gt;&lt;path d=&quot;M63 45.3v-20c0-4.1-1-7.3-3.2-9.7-2.1-2.4-5-3.7-8.5-3.7-4.1 0-7.2 1.6-9.3 4.7l-2 3.3-2-3.3c-2-3.1-5.1-4.7-9.2-4.7-3.5 0-6.4 1.3-8.6 3.7-2.1 2.4-3.1 5.6-3.1 9.7v20h8V25.9c0-4.1 1.7-6.2 5.2-6.2 3.8 0 5.8 2.5 5.8 7.4V37.7H44V27.1c0-4.9 1.9-7.4 5.8-7.4 3.5 0 5.2 2.1 5.2 6.2V45.3h8ZM74.7 16.6c.6 6 .1 15.7.1 17.3 0 .5-.1 4.8-.1 5.3-.7 11.5-8 16-15.6 17.5-.1 0-.2 0-.3 0-4.9 1-10 1.2-14.9 1.4-1.2 0-2.4 0-3.6 0-4.8 0-9.7-.6-14.4-1.7-.1 0-.1 0-.1 0s-.1 0-.1 0 0 .1 0 .1 0 0 0 0c.1 1.6.4 3.1 1 4.5.6 1.7 2.9 5.7 11.4 5.7 5 0 9.9-.6 14.8-1.7 0 0 0 0 0 0 .1 0 .1 0 .1 0 0 .1 0 .1 0 .1.1 0 .1 0 .1.1v5.6s0 .1-.1.1c0 0 0 0 0 .1-1.6 1.1-3.7 1.7-5.6 2.3-.8.3-1.6.5-2.4.7-7.5 1.7-15.4 1.3-22.7-1.2-6.8-2.4-13.8-8.2-15.5-15.2-.9-3.8-1.6-7.6-1.9-11.5-.6-5.8-.6-11.7-.8-17.5C3.9 24.5 4 20 4.9 16 6.7 7.9 14.1 2.2 22.3 1c1.4-.2 4.1-1 16.5-1h.1C51.4 0 56.7.8 58.1 1c8.4 1.2 15.5 7.5 16.6 15.6Z&quot; fill=&quot;currentColor&quot;&gt;&lt;/path&gt;&lt;/svg&gt; &lt;div style=&quot;color: #787588; margin-top: 16px;&quot;&gt;Post by @joethephish@mastodon.gamedev.place&lt;/div&gt; &lt;div style=&quot;font-weight: 500;&quot;&gt;View on Mastodon&lt;/div&gt; &lt;/a&gt; &lt;/blockquote&gt; &lt;script data-allowed-prefixes=&quot;https://mastodon.gamedev.place/&quot; async=&quot;&quot; src=&quot;https://mastodon.gamedev.place/embed.js&quot;&gt;&lt;/script&gt;
&lt;p&gt;(One thing that genuinely helped in the early stages was the fact that the Shortcuts app has Apple Intelligence actions. It’s a great place to prototype prompts quickly without wiring anything into my code. I could tweak phrasing, see how the model behaved, and very quickly learn what it was hopeless at and what it could just about handle. Most of my dead ends and small breakthroughs happened in Shortcuts long before I wrote a line of Swift.)&lt;/p&gt;
&lt;p&gt;I realised that it doesn&#39;t actually know what SF Symbols look like, or the concepts that they&#39;re intended to represent. So I tried changing the multiple choice to a numbered list of semantic words - the model&#39;s pick would then be turned into an SF Symbol. I also found that asking it to start by writing a brief description of the event helped as a sort of short &amp;quot;thinking phase&amp;quot; to expand on what a terse event title might refer to:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://joethephish.me/blog/img/apple-intelligence-in-shortcuts.png&quot; alt=&quot;Using On-Device model in Shortcuts app&quot; /&gt;&lt;/p&gt;
&lt;p&gt;This worked... more than 50% of the time? But I want to hit at least ~95% accuracy, not 60%. Also, this is a very short list of possible symbols.&lt;/p&gt;
&lt;p&gt;By the time you get to around 4-6 choices it&#39;s pretty reliable. So I considered splitting SF Symbols into nested tiers of categories, effectively having it navigate through menus to find the right symbol. It may have worked in theory but was far too slow and fragile. If each step takes about a second you end up waiting several seconds just to get an icon, and the categorisation itself is never perfectly clean.&lt;/p&gt;
&lt;h2&gt;🤓 Emojis are always the answer 🤯&lt;/h2&gt;
&lt;p&gt;So I took a step back and thought about what LLMs are trained on - the internet. &lt;strong&gt;Emoji&lt;/strong&gt;. They swim in emoji. There is a mountain of training data online where people use emoji contextually with everyday language. Emoji are also short to produce which makes the prompts fast.&lt;/p&gt;
&lt;p&gt;The new idea was simple. Ask the model for one emoji that represents the event title. Nothing else. No commentary. No fuss. That single change fixed almost everything. The prompt is tiny, the output is tiny, and the model is very confident when choosing emoji.&lt;/p&gt;
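&lt;p&gt;In Swift, the whole request ends up being tiny - something along these lines, with illustrative prompt wording rather than my real prompt:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-swift&quot;&gt;import Foundation
import FoundationModels

// Ask the on-device model for a single emoji. Prompt wording is illustrative only.
func emoji(forEventTitle title: String) async throws -&amp;gt; String {
    let session = LanguageModelSession()
    let prompt = &amp;quot;Reply with exactly one emoji that best represents this event, and nothing else: \(title)&amp;quot;
    let response = try await session.respond(to: prompt)
    return response.content.trimmingCharacters(in: .whitespacesAndNewlines)
}
&lt;/code&gt;&lt;/pre&gt;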
&lt;p&gt;But I didn&#39;t actually want to use emoji - they weren&#39;t the tone or style I was going for - I wanted to use Apple&#39;s elegant vector SF Symbols. So I built what is now the secret sauce that makes the whole feature work. A giant dictionary that maps emoji to SF Symbols. In practice I author it backwards: as a limited number of SF Symbol names that map to a long string of possible emoji, which I then create a reverse mapping for on load. When the model produces an emoji, I look it up in the dictionary and convert it to the SF Symbol it belongs to:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-swift&quot;&gt;let sfSymbolToEmojiMapping = [
    (&amp;quot;fork.knife&amp;quot;, &amp;quot;🍽️🍴🥄🍕🍔🍟🥗🥪🍣🍜🍝🌮🌯🥘🍲🍱🥙🌭🥟🥠🥡🥨🥯🥞🧇🧀🍖🍗🥩🥓🧈🥐🥖🫓🍞🥢🍳👩‍🍳&amp;quot;),
    // ... more (SF Symbol name, emoji string) pairs ...
]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;So if your event title contains “eat tacos”, the model will likely produce the taco emoji which then maps neatly to &lt;code&gt;fork.knife&lt;/code&gt;.&lt;/p&gt;
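&lt;p&gt;The reverse lookup itself is trivial to build on load - something like this sketch, reusing the &lt;code&gt;sfSymbolToEmojiMapping&lt;/code&gt; array above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-swift&quot;&gt;// Build the emoji-to-symbol lookup once, on load. Swift treats even multi-scalar
// emoji as a single Character, so each one maps back to its SF Symbol name.
let emojiToSFSymbol: [Character: String] = {
    var mapping: [Character: String] = [:]
    for (symbolName, emojiString) in sfSymbolToEmojiMapping {
        for emoji in emojiString {
            mapping[emoji] = symbolName
        }
    }
    return mapping
}()

// &amp;quot;🌮&amp;quot; (or whichever food emoji the model picks) resolves to &amp;quot;fork.knife&amp;quot;.
func sfSymbol(forModelOutput output: String) -&amp;gt; String? {
    output.first.flatMap { emojiToSFSymbol[$0] }
}
&lt;/code&gt;&lt;/pre&gt;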
&lt;p&gt;Most of the time this gives lovely results. Occasionally it does something ridiculous. “Eat breakfast” once came back as an 🥑 avocado emoji (?!), which then mapped to &lt;code&gt;carrot.fill&lt;/code&gt; because I included all the vegetable and healthy eating emoji under that symbol. So breakfast briefly became a carrot. Fair enough. Breakfast is what you make of it.&lt;/p&gt;
&lt;video class=&quot;video&quot; autoplay=&quot;&quot; controls=&quot;&quot; playsinline=&quot;&quot;&gt;
  &lt;source src=&quot;https://assets.selkie.design/hour-by-hour/hour-by-hour-icon-picking.mp4&quot; type=&quot;video/mp4&quot; /&gt;
  Your browser does not support the video tag.
&lt;/video&gt;
&lt;p class=&quot;caption&quot;&gt;Known bugs – &amp;quot;Start work&amp;quot; should really have something like a briefcase, and it shouldn&#39;t default to having the previous icon for newly created events&lt;/p&gt;
&lt;p&gt;The best part is that this is incredibly fast. The on-device model only gets a small prompt, and there is no heavy back and forth with the cloud. It behaves well, feels instant, and makes Hour by Hour nicer to use without draining power or requiring a connection.&lt;/p&gt;
&lt;p&gt;I wanted Apple Intelligence to feel genuinely helpful and not like a gimmick. With this emoji trick it finally clicked into place. A tiny model can do something surprisingly clever if you ask it for the right thing.&lt;/p&gt;
</description>
      <pubDate>Fri, 14 Nov 2025 00:00:00 GMT</pubDate>
      <dc:creator>Joseph Humfrey</dc:creator>
      <guid>https://joethephish.me/blog/apple-foundation-model-icon-picking/</guid>
      <dateString>14th November 2025</dateString>
    </item>
    <item>
      <title>The Selkie Design blog is now Joe&#39;s blog</title>
      <link>https://joethephish.me/blog/blog-migration/</link>
      <description>&lt;p&gt;Quick update: I&#39;ve moved my blog from &lt;a href=&quot;https://selkie.design/&quot;&gt;Selkie.Design&lt;/a&gt; over to here!&lt;/p&gt;
&lt;p&gt;The main reason is that I wanted somewhere I could write about anything — whether it&#39;s &lt;a href=&quot;https://www.inklestudios.com/&quot;&gt;inkle&lt;/a&gt; and game development stuff, or &lt;a href=&quot;https://selkie.design/&quot;&gt;Selkie Design&lt;/a&gt; and Apple platform things. It felt weird having a blog on the Selkie site when half my thoughts weren&#39;t really Selkie-related.&lt;/p&gt;
&lt;p&gt;I&#39;ve been particularly excited to get back into &lt;a href=&quot;https://godotengine.org/&quot;&gt;Godot&lt;/a&gt; lately, and I&#39;ve got some thoughts brewing about that. More to come soon!&lt;/p&gt;
&lt;p&gt;All the old posts are still here of course, and if you were subscribed to the old feed, you might want to grab the &lt;a href=&quot;https://joethephish.me/feed.xml&quot;&gt;new RSS feed&lt;/a&gt; instead, though hopefully all the redirects will work!&lt;/p&gt;
&lt;p&gt;Cheers!&lt;/p&gt;
</description>
      <pubDate>Tue, 14 Oct 2025 00:00:00 GMT</pubDate>
      <dc:creator>Joseph Humfrey</dc:creator>
      <guid>https://joethephish.me/blog/blog-migration/</guid>
      <dateString>14th October 2025</dateString>
    </item>
    <item>
      <title>Substage Predicts...</title>
      <link>https://joethephish.me/blog/substage-predicts/</link>
      <description>&lt;p&gt;I might have gone a bit overboard on this one: I built a Terminal command prediction engine for &lt;a href=&quot;https://joethephish.me/substage&quot;&gt;Substage&lt;/a&gt;, and to make it work, I ended up creating my own simulator for &lt;a href=&quot;https://en.wikipedia.org/wiki/Bash_(Unix_shell)&quot;&gt;bash&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Before any command gets run, Substage now parses and simulates it, then shows you a visual representation of what’s about to happen to your files (and more). It&#39;s &lt;strong&gt;literally running the command in my own bash implementation&lt;/strong&gt; and seeing how your files would be changed as a result of running it for real:&lt;/p&gt;
&lt;video class=&quot;video&quot; autoplay=&quot;&quot; loop=&quot;&quot; muted=&quot;&quot; playsinline=&quot;&quot;&gt;
    &lt;source src=&quot;https://assets.selkie.design/substage/videos/command-summary-demo.mp4&quot; type=&quot;video/mp4&quot; /&gt;
&lt;/video&gt;
&lt;p&gt;If you know anything about the command line, you know how easy it is for small mistakes to cause headaches. After hearing from a few users who ran commands in Substage that err... didn’t quite do what they expected, it seemed worth putting some additional visualisation in to see what Substage is about to do. So yes, I ended up writing my own little version of bash (and a handful of standard commands) just to work out what a command will do before it actually does anything.&lt;/p&gt;
&lt;p&gt;For example, if you’re about to batch rename a folder full of files, Substage will show exactly which filenames would change, if any files would get overwritten, and whether any new files would be created or deleted. I wanted it to be as visually clear as a GUI app for git, such as the excellent &lt;a href=&quot;https://git-fork.com/&quot;&gt;Fork&lt;/a&gt;. I also keep track of any side effects - such as changing system settings. Here&#39;s an example of requesting a video to be converted, and explicitly asking it to delete the original:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://joethephish.me/blog/img/substage-predicts.jpg&quot; alt=&quot;Substage predicting command effects&quot; /&gt;&lt;/p&gt;
&lt;p&gt;So how does it work? Substage fully parses the most common syntax you&#39;re likely to find in one-liner commands and also attempts to evaluate it as fully as reasonably possible. When it gets down to the individual command level, not every command is treated the same way. Simple, “harmless” commands, like checking a file’s metadata or listing files, can be run directly, since they don’t modify anything on disk. But for commands that could change or delete files, or do anything risky or long-running, Substage tries to model the effects without touching your real files.&lt;/p&gt;
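&lt;p&gt;As a rough sketch of that split (not Substage&#39;s actual code), you can think of each command name resolving to an execution plan:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-swift&quot;&gt;// Sketch only - not Substage&#39;s real implementation.
enum ExecutionPlan {
    case runDirectly       // read-only and harmless, so just run it
    case simulateFirst     // could modify or delete files, so model the effects first
    case warnUnrecognised  // outside the supported set, so show a strong warning
}

let readOnlyCommands: Set&amp;lt;String&amp;gt; = [&amp;quot;ls&amp;quot;, &amp;quot;wc&amp;quot;, &amp;quot;file&amp;quot;, &amp;quot;mdls&amp;quot;, &amp;quot;du&amp;quot;]
let simulatedCommands: Set&amp;lt;String&amp;gt; = [&amp;quot;mv&amp;quot;, &amp;quot;cp&amp;quot;, &amp;quot;rm&amp;quot;, &amp;quot;zip&amp;quot;, &amp;quot;ffmpeg&amp;quot;, &amp;quot;sips&amp;quot;]

func plan(forCommand name: String) -&amp;gt; ExecutionPlan {
    if readOnlyCommands.contains(name) { return .runDirectly }
    if simulatedCommands.contains(name) { return .simulateFirst }
    return .warnUnrecognised
}
&lt;/code&gt;&lt;/pre&gt;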
&lt;p&gt;What’s interesting is that the hardest commands to simulate aren’t always the ones you’d expect. Tools like ffmpeg look intimidating, but their input/output structure is actually quite predictable. Meanwhile, seemingly simple commands like &lt;code&gt;cp&lt;/code&gt; (the copy command) have a huge number of options and edge cases, making them much trickier to handle. Most Substage workflows tend to use a fairly small set of tools (&lt;code&gt;ffmpeg&lt;/code&gt;, &lt;code&gt;sips&lt;/code&gt;, &lt;code&gt;mv&lt;/code&gt;, etc.), so Substage mostly focuses on supporting those especially well.&lt;/p&gt;
&lt;p&gt;It&#39;s not necessary to perfectly cover every single command for it to be useful. Most real-world usage in Substage relies on a relatively small set of common tools and patterns, so even partial coverage already catches a lot of issues and gives valuable visibility. Over time, I can keep expanding and improving Substage&#39;s understanding, adding support for more commands and edge cases as they come up. Since the vast majority of use cases are covered by the commands we support, Substage shows a strong warning when running an unrecognised command, making it clear to the user when they’re in less-charted territory. But let me know, either on &lt;a href=&quot;https://substage.featurebase.app/&quot;&gt;Featurebase&lt;/a&gt;, via &lt;a href=&quot;mailto:info@selkie.design&quot;&gt;email&lt;/a&gt; or &lt;a href=&quot;https://discord.gg/jgkwAv4H7M&quot;&gt;Discord&lt;/a&gt; if there&#39;s a command missing that you&#39;d like to see supported!&lt;/p&gt;
&lt;p&gt;Another benefit of this approach is that Substage can be a lot more precise about evaluating the risk of a command, and can let you choose which commands are allowed to run automatically without needing confirmation. Hence this new settings page:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://joethephish.me/blog/img/substage-auto-run-settings.jpg&quot; alt=&quot;Substage auto-run settings&quot; /&gt;&lt;/p&gt;
&lt;p&gt;For me, Substage is about making powerful tools more accessible, without losing any of the flexibility or control that makes the command line so useful. I’ve always preferred visual tools to the command line (hence Substage generally, and I use GUI git apps for similar reasons). I want Substage to be accessible for beginners but still efficient for power users. By simulating commands and visualising their effects, Substage takes away a lot of the guesswork in understanding what a generated bash command will actually do, especially when a rogue rename can cause data loss. But it’s not just about safety; it’s about understanding what’s going to happen, and feeling more in control.&lt;/p&gt;
&lt;p&gt;Building this (even a partial) bash simulator was one of the most ambitious things I’ve done for Substage. The update is out now - give it a try in the latest &lt;a href=&quot;https://joethephish.me/substage&quot;&gt;Substage&lt;/a&gt; release!&lt;/p&gt;
</description>
      <pubDate>Thu, 31 Jul 2025 00:00:00 GMT</pubDate>
      <dc:creator>Joseph Humfrey</dc:creator>
      <guid>https://joethephish.me/blog/substage-predicts/</guid>
      <dateString>31st July 2025</dateString>
    </item>
    <item>
      <title>Substage ❤️ Setapp</title>
      <link>https://joethephish.me/blog/substage-on-setapp/</link>
      <description>&lt;p&gt;Good news, &lt;a href=&quot;https://joethephish.me/substage&quot;&gt;Substage&lt;/a&gt; is now on &lt;a href=&quot;https://setapp.com/apps/substage?refAppID=764&amp;amp;utm_medium=vendor_program&quot;&gt;Setapp&lt;/a&gt;!&lt;/p&gt;
&lt;p&gt;If you’ve already got a Setapp subscription, you can grab &lt;strong&gt;Substage&lt;/strong&gt; &lt;a href=&quot;https://setapp.com/apps/substage?refAppID=764&amp;amp;utm_medium=vendor_program&quot;&gt;right now for free&lt;/a&gt;. If not, this might be a nice excuse to give Setapp a try; I’ve always loved the idea of it, especially for smaller indie apps like mine where a subscription might feel like a bit much.&lt;/p&gt;
&lt;p&gt;If you don&#39;t know it, Setapp is a curated collection of Mac apps you get access to with a single subscription. They’re pretty selective about what they include, so I was really pleased when they chose to accept Substage.&lt;/p&gt;
&lt;div&gt;
    &lt;setapp-custom-banner iconUrl=&quot;https://store.setapp.com/app/764/main/icon-682c9ed88ca9e.png&quot; appName=&quot;Substage&quot; appId=&quot;764&quot; vendorId=&quot;464&quot;&gt;&lt;/setapp-custom-banner&gt;
&lt;p&gt;&lt;script type=&quot;text/javascript&quot; src=&quot;https://developer.setapp.com/setapp-banner/index.js&quot; async=&quot;&quot;&gt;&lt;/script&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;Even better, they gave me some super thoughtful feedback during the review process - stuff that actually led to improvements in the app itself, and that benefits everyone. Directly inspired by Setapp&#39;s feedback, I&#39;ve added a &lt;strong&gt;brand new feature&lt;/strong&gt; in the latest release of Substage: full Spotlight search. You can now ask Substage things like “show me all jpg images created yesterday” and it’ll use Spotlight behind the scenes to find the right files:&lt;/p&gt;
&lt;video class=&quot;video&quot; controls=&quot;&quot; autoplay=&quot;&quot; loop=&quot;&quot; muted=&quot;&quot; playsinline=&quot;&quot;&gt;
  &lt;source src=&quot;https://assets.selkie.design/substage/videos/mdfind-demo.mp4&quot; type=&quot;video/mp4&quot; /&gt;
  Your browser does not support the video tag.
&lt;/video&gt;
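&lt;p&gt;If you&#39;re curious what &amp;quot;use Spotlight behind the scenes&amp;quot; can look like in practice, the command-line route to Spotlight is &lt;code&gt;mdfind&lt;/code&gt;. Here&#39;s a rough sketch of calling it from Swift - the query and names here are illustrative, not Substage&#39;s actual ones:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-swift&quot;&gt;import Foundation

// Sketch: ask Spotlight (via /usr/bin/mdfind) for all JPEGs inside a folder.
// Date predicates and other metadata attributes can be appended to the query string.
func jpegPaths(in folder: String) throws -&amp;gt; [String] {
    let process = Process()
    process.executableURL = URL(fileURLWithPath: &amp;quot;/usr/bin/mdfind&amp;quot;)
    process.arguments = [
        &amp;quot;-onlyin&amp;quot;, folder,
        &amp;quot;kMDItemContentType == \&amp;quot;public.jpeg\&amp;quot;&amp;quot;
    ]
    let pipe = Pipe()
    process.standardOutput = pipe
    try process.run()
    process.waitUntilExit()
    let data = pipe.fileHandleForReading.readDataToEndOfFile()
    return String(decoding: data, as: UTF8.self)
        .split(separator: &amp;quot;\n&amp;quot;)
        .map(String.init)
}
&lt;/code&gt;&lt;/pre&gt;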
&lt;p&gt;So far, I&#39;ve found that most people prefer to go for the &amp;quot;Bring Your Own AI&amp;quot; option over the subscription version, and I totally get that - it appeals most to a technical audience who might have access to their own API keys already, and people do get a bit weary of too many subscriptions. What&#39;s cool about Setapp is that you can get the benefit of bundling all your apps into a single subscription to keep things simple. It also lets you try out more apps without doing the whole “is this worth it?” calculation every time.&lt;/p&gt;
&lt;p&gt;The version of Substage on Setapp includes GPT-4.1 Mini by default, which I’ve found to be a great all-rounder—quick, reliable, and good enough for almost everything I throw at it. But if you’re the kind of person who likes having more control (or already has your own API keys), you can still plug those in too. Just like the other editions of Substage, the Setapp version supports “Bring Your Own AI” as well.&lt;/p&gt;
&lt;p&gt;That&#39;s it for now! By the way, I’m always keen to hear how people are using it (or what’s not working!) Come say hi in the &lt;a href=&quot;https://discord.gg/jgkwAv4H7M&quot;&gt;Discord&lt;/a&gt; if you’ve got questions, feedback, or just want to compare AI models.&lt;/p&gt;
</description>
      <pubDate>Wed, 02 Jul 2025 00:00:00 GMT</pubDate>
      <dc:creator>Joseph Humfrey</dc:creator>
      <guid>https://joethephish.me/blog/substage-on-setapp/</guid>
      <dateString>2nd July 2025</dateString>
    </item>
    <item>
      <title>The Shortcut to integrating Private Cloud Compute into my app</title>
      <link>https://joethephish.me/blog/the-shortcut-to-integrating-PCC/</link>
      <description>&lt;p&gt;Apple’s &lt;a href=&quot;https://security.apple.com/documentation/private-cloud-compute&quot;&gt;Private Cloud Compute&lt;/a&gt; is pretty cool - it lets you use Apple’s cloud LLMs with strong privacy guarantees, and it&#39;s a much more capable LLM than their on-device models. Best of all? So far, it seems to be completely free.&lt;/p&gt;
&lt;p&gt;But if you’re a developer, you might have noticed something odd: &lt;strong&gt;there’s no public API for Private Cloud Compute&lt;/strong&gt;, and this is a shame given that their new Apple Foundation Model API is fantastic, and super well designed. There&#39;s no way to integrate it into your app, or even use it from the command line. Or is there...?&lt;/p&gt;
&lt;h2&gt;The Shortcut (literally)&lt;/h2&gt;
&lt;p&gt;Here’s the twist: &lt;strong&gt;Private Cloud Compute &lt;em&gt;is&lt;/em&gt; available to users via Shortcuts&lt;/strong&gt;. That means, if you&#39;re a Mac app developer, then with a little creativity, you can actually &lt;em&gt;call&lt;/em&gt; PCC from your own code, by wrapping it up in a Shortcut and invoking it from your app. (I&#39;m curious, is there a way to do this on iOS? Do apps have the ability to call arbitrary Shortcuts...?)&lt;/p&gt;
&lt;p&gt;I’m not a Shortcuts power user by any means, but after a bit of tinkering, I managed to put together a super simple Shortcut that takes a prompt as input, runs it through Private Cloud Compute, and outputs the result. Here’s what it looks like:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://joethephish.me/blog/img/pcc-shortcut-screenshot.jpg&quot; alt=&quot;Screenshot of the simple Private Cloud Compute Shortcut&quot; /&gt;&lt;/p&gt;
&lt;p&gt;The Shortcut itself is just a couple of actions: it takes the input, passes it to the “Use Model” action with the PCC model selected, and returns the result. That’s it!&lt;/p&gt;
&lt;h2&gt;Calling Shortcuts from Swift&lt;/h2&gt;
&lt;p&gt;Now, how do you call this Shortcut from your own app? Turns out, macOS ships with a handy command-line tool: &lt;code&gt;/usr/bin/shortcuts&lt;/code&gt;. You can use it to run any Shortcut, pass in input, and capture the output.&lt;/p&gt;
&lt;p&gt;So, in my Swift app I simply used &lt;code&gt;Process&lt;/code&gt; to run the shortcuts command-line tool and then returned the output, something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-swift&quot;&gt;let process = Process()
process.executableURL = URL(fileURLWithPath: &amp;quot;/usr/bin/shortcuts&amp;quot;)
process.arguments = [
    &amp;quot;run&amp;quot;,
    &amp;quot;Substage-PCC&amp;quot;,       // the name of the Shortcut to run
    &amp;quot;-o&amp;quot;, pccOutputPath    // a temporary file the Shortcut writes its result to
]
try process.run()
process.waitUntilExit()

// Read back whatever the Shortcut wrote out
let output = try String(contentsOfFile: pccOutputPath, encoding: .utf8)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This approach lets me effectively use Private Cloud Compute as an AI model within &lt;a href=&quot;https://joethephish.me/substage&quot;&gt;Substage&lt;/a&gt;. I’ve been experimenting with it, and &lt;em&gt;so far&lt;/em&gt;, its performance feels comparable to an average 8 billion parameter open source model. It&#39;s fine for general tasks, but in my experience it doesn’t quite match the coding abilities of specialized models like Qwen Coder 7b. That said, I&#39;m currently reusing the same prompt that I use for other models, and maybe it could do with some tweaks to improve its accuracy. It’s also noticeably slower and less capable than something like GPT-4.1 Mini.&lt;/p&gt;
&lt;p&gt;But there’s a big upside: for the user, it’s not just cheap - it’s completely free, with no tokens to buy or API keys to manage. (Presumably though, Apple must have put in some fair use restrictions somewhere?)&lt;/p&gt;
&lt;p&gt;From a user’s perspective, the setup won&#39;t be entirely seamless though. To get started, they’ll need to agree to add the Shortcut to the Shortcuts app (and err, not mess with it?), and the first time it runs, macOS will likely prompt them to grant permission for automation. It’s a couple of extra steps, but once set up, it works reliably in the background without needing to be prompted again.&lt;/p&gt;
&lt;p&gt;Overall, while it’s not a drop-in replacement for the fastest or most capable cloud models, this method is a fun way to tap into Apple’s privacy-preserving AI for your own workflows... at least until Apple adds official API support. I think I&#39;ll hold back from doing more work to integrate it until I can be sure the user experience can be streamlined, or at least until closer to September, when macOS Tahoe is released - maybe they&#39;ll add an official API by then.&lt;/p&gt;
&lt;p&gt;If you try this approach, let me know how it works for you, or if you find any clever ways to improve the user experience!&lt;/p&gt;
</description>
      <pubDate>Fri, 20 Jun 2025 24:00:00 GMT</pubDate>
      <dc:creator>Joseph Humfrey</dc:creator>
      <guid>https://joethephish.me/blog/the-shortcut-to-integrating-PCC/</guid>
      <dateString>20th June 2025</dateString>
    </item>
    <item>
      <title>Substage update: Bring Your Own AI &amp; One-off purchase out now!</title>
      <link>https://joethephish.me/blog/bring-your-own-ai/</link>
      <description>&lt;p&gt;&lt;img src=&quot;https://joethephish.me/blog/img/ai-model-settings.jpg&quot; alt=&quot;Substage AI Model Settings&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Good news! Substage now supports &lt;strong&gt;custom API keys&lt;/strong&gt; and &lt;strong&gt;local LLMs&lt;/strong&gt;, and there’s a new &lt;strong&gt;one-off purchase option&lt;/strong&gt; if you want to &lt;strong&gt;Bring Your Own AI&lt;/strong&gt;! That means you can:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Use your own API key for OpenAI, Anthropic, Google or Mistral&lt;/li&gt;
&lt;li&gt;Run local LLMs via &lt;a href=&quot;https://lmstudio.ai/&quot;&gt;LM Studio&lt;/a&gt; or &lt;a href=&quot;https://ollama.com/&quot;&gt;Ollama&lt;/a&gt; — right on your own machine&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The &lt;strong&gt;one-off purchase option&lt;/strong&gt; is only available in the latest version of Substage, so be sure to &lt;a href=&quot;https://joethephish.me/substage/download/&quot;&gt;grab the update&lt;/a&gt; if you decide to pick it up!&lt;/p&gt;
&lt;p&gt;Also, in this new version: I&#39;m re-enabling support for Google Gemini. I was previously uncomfortable with their privacy stance, as it varied depending on payment tier. After reviewing it more carefully and ensuring everything was properly configured on my end, I&#39;m comfortable re-including it. Additionally, with users now able to use their own API keys, I wanted to ensure I was meeting people where they were at.&lt;/p&gt;
&lt;p&gt;If you already have a previous version of Substage installed, click its icon in the menu bar and choose Check For Updates. Otherwise, download the &lt;a href=&quot;https://joethephish.me/substage/download/&quot;&gt;latest version here!&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I’ve also set up a brand new &lt;a href=&quot;https://discord.gg/jgkwAv4H7M&quot;&gt;Discord server&lt;/a&gt;. If you’ve got questions, ideas, or want to share what models are working well for you, that’s the place!&lt;/p&gt;
&lt;h2&gt;How to get started&lt;/h2&gt;
&lt;h3&gt;Using your own API key&lt;/h3&gt;
&lt;p&gt;Click the Substage settings button, head to the new &lt;strong&gt;AI Models&lt;/strong&gt; tab, pick a model (like GPT-4o), and paste in your API key. That key is saved for the provider, so if you later pick another OpenAI model—like GPT-4o-mini—it’ll use the same one.&lt;/p&gt;
&lt;p&gt;By the way, if you haven&#39;t tried Mistral yet, I can highly recommend them - their models are really fast and accurate.&lt;/p&gt;
&lt;h3&gt;Running a local LLM&lt;/h3&gt;
&lt;p&gt;You’ll need either &lt;a href=&quot;https://lmstudio.ai/&quot;&gt;LM Studio&lt;/a&gt; or &lt;a href=&quot;https://ollama.com/&quot;&gt;Ollama&lt;/a&gt;. If you haven’t used either, LM Studio is probably the easier way in — it’s got a solid UI and good setup defaults.&lt;/p&gt;
&lt;p&gt;Once it’s running and you&#39;ve got a model downloaded, click Substage&#39;s Settings button, and browse to the AI Models tab. Click the &amp;quot;+&amp;quot; button and select &amp;quot;Add new custom model&amp;quot;. Fill in the model name (such as &lt;code&gt;meta-llama-3.1-8b-instruct&lt;/code&gt;) and the base URL. For LM Studio that&#39;s &lt;code&gt;http://localhost:1234&lt;/code&gt;, and for Ollama it&#39;s &lt;code&gt;http://localhost:11434&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;I&#39;ve tested a few models and would suggest using something in the 7–8B parameter range or larger if your Mac can handle it. &lt;strong&gt;Qwen 2.5 Coder&lt;/strong&gt; has been working well for me on my 5 year old M1 iMac. I also recommend reducing the context length for speed. Around 1000 tokens works well. In LM Studio, you can do this by selecting the model in the main model list, clicking the cog settings button, and adjusting the &amp;quot;Context Length&amp;quot; parameter. I wouldn&#39;t recommend reasoning models such as DeepSeek R1, since the reasoning step prevents Substage from feeling snappy, and I don&#39;t believe the extra thinking time particularly helps.&lt;/p&gt;
&lt;p&gt;If you&#39;d rather not mess with separate tools, I’m considering building in a simple way to download and use recommended models straight from the app. If that sounds good, &lt;a href=&quot;https://substage.featurebase.app/&quot;&gt;let me know&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Behind the scenes&lt;/h2&gt;
&lt;p&gt;This release needed a fair bit of reworking under the hood—especially to support local models cleanly.&lt;/p&gt;
&lt;p&gt;Prompting had to be redesigned. Previously, Substage used a long example-laden prompt to teach the model how to behave. That was fine for big cloud models, but local ones couldn’t handle the length. So now, prompts include just a few targeted examples, chosen dynamically based on what you type and what kind of files you’ve selected. If you ask it to convert a video to mp4, for instance, Substage quietly drops in an ffmpeg example behind the scenes. It’s faster, leaner, and still accurate.&lt;/p&gt;
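&lt;p&gt;In spirit, the selection logic looks something like this little sketch (the names and structure here are just for illustration, not Substage’s actual code):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-Swift&quot;&gt;struct PromptExample {
    let keywords: Set&amp;lt;String&amp;gt;        // words in the request that make this example relevant
    let fileExtensions: Set&amp;lt;String&amp;gt;  // selected-file types it applies to
    let text: String                  // the worked example dropped into the prompt
}

let examples = [
    PromptExample(keywords: [&amp;quot;convert&amp;quot;, &amp;quot;mp4&amp;quot;],
                  fileExtensions: [&amp;quot;mov&amp;quot;, &amp;quot;avi&amp;quot;, &amp;quot;mkv&amp;quot;],
                  text: &amp;quot;ffmpeg -i input.mov output.mp4&amp;quot;),
    PromptExample(keywords: [&amp;quot;resize&amp;quot;],
                  fileExtensions: [&amp;quot;png&amp;quot;, &amp;quot;jpg&amp;quot;],
                  text: &amp;quot;sips -Z 1024 photo.jpg&amp;quot;)
]

func relevantExamples(for request: String, selectedExtensions: Set&amp;lt;String&amp;gt;) -&amp;gt; [PromptExample] {
    let words = Set(request.lowercased().split(separator: &amp;quot; &amp;quot;).map(String.init))
    return examples.filter { example in
        !example.keywords.isDisjoint(with: words) || !example.fileExtensions.isDisjoint(with: selectedExtensions)
    }
}
&lt;/code&gt;&lt;/pre&gt;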
&lt;p&gt;I also had to rethink risk assessment for local LLMs. Cloud models analyse risk as they generate the command, using a custom format I specify. Local models couldn’t stick to the format reliably, so I added a new “Fast Mode” option for custom models which outputs ONLY the Terminal command, and Substage uses a new hand-coded method to do risk assessment instead.&lt;/p&gt;
&lt;p&gt;The new risk assessment scans the generated command for known tools and usage patterns. For example, if the command includes something like &lt;code&gt;rm&lt;/code&gt;, it&#39;s flagged high risk; if it’s something harmless like &lt;code&gt;echo&lt;/code&gt;, it’s low. It&#39;s never going to be absolutely bulletproof though, so you can still require confirmation every time if you prefer.&lt;/p&gt;
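&lt;p&gt;As a rough illustration of the idea (a simplified sketch, not the actual rules Substage ships with):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-Swift&quot;&gt;enum Risk { case low, medium, high }

let highRiskTools = [&amp;quot;rm&amp;quot;, &amp;quot;sudo&amp;quot;, &amp;quot;dd&amp;quot;, &amp;quot;mkfs&amp;quot;]
let mediumRiskTools = [&amp;quot;mv&amp;quot;, &amp;quot;chmod&amp;quot;, &amp;quot;kill&amp;quot;]

func assessRisk(of command: String) -&amp;gt; Risk {
    // Check whole tokens, so &amp;quot;rm&amp;quot; is not matched inside a longer word like &amp;quot;format&amp;quot;.
    let tokens = command.split(whereSeparator: { $0.isWhitespace }).map(String.init)
    if tokens.contains(where: { highRiskTools.contains($0) }) { return .high }
    if tokens.contains(where: { mediumRiskTools.contains($0) }) { return .medium }
    return .low   // e.g. a plain echo command lands here
}
&lt;/code&gt;&lt;/pre&gt;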
&lt;p&gt;That&#39;s it! Let me know what you think:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://discord.gg/jgkwAv4H7M&quot;&gt;Join the Discord&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://substage.featurebase.app/&quot;&gt;Request a feature&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Message me on &lt;a href=&quot;https://mastodon.gamedev.place/@joethephish&quot;&gt;Mastodon&lt;/a&gt; or &lt;a href=&quot;https://bsky.app/profile/joe.inkle.co&quot;&gt;Bluesky&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://selkie.design/substage/#email-signup&quot;&gt;Sign up for email updates&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
      <pubDate>Tue, 25 Mar 2025 24:00:00 GMT</pubDate>
      <dc:creator>Joseph Humfrey</dc:creator>
      <guid>https://joethephish.me/blog/bring-your-own-ai/</guid>
      <dateString>25th March 2025</dateString>
    </item>
    <item>
      <title>Introducing Substage: A natural language command bar for Finder</title>
      <link>https://joethephish.me/blog/announcing-substage/</link>
      <description>&lt;video class=&quot;video&quot; autoplay=&quot;&quot; loop=&quot;&quot; muted=&quot;&quot; playsinline=&quot;&quot;&gt;
    &lt;source src=&quot;https://assets.selkie.design/substage/videos/multi-step.mp4&quot; type=&quot;video/mp4&quot; /&gt;
&lt;/video&gt;
&lt;p&gt;I made a Mac productivity app! 🎉&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://joethephish.me/substage&quot;&gt;Substage&lt;/a&gt; puts a command bar underneath your Finder windows and lets you use natural language to convert media, manage files, perform calculations, and more!&lt;/p&gt;
&lt;p&gt;Although Substage translates natural language into Terminal commands, I’m hoping it finds a broader audience beyond developers. The number one use case I’ve found for it is converting media—quickly resizing images, re-encoding videos etc. But it can also handle file management, metadata inspection, and general tasks like web requests and calculations.&lt;/p&gt;
&lt;h2&gt;How It Works&lt;/h2&gt;
&lt;p&gt;1️⃣ Converts your natural language request into a Terminal command.&lt;/p&gt;
&lt;p&gt;2️⃣ If it’s potentially risky, it asks for confirmation first.&lt;/p&gt;
&lt;p&gt;3️⃣ Runs the command and summarises the output for you.&lt;/p&gt;
&lt;p&gt;I&#39;d love to hear what you think! You can grab it right here, and take it for a spin:&lt;/p&gt;
&lt;p&gt;👉 &lt;a href=&quot;https://selkie.design/substage/&quot;&gt;Try Substage for free&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;What About &lt;em&gt;Hour by Hour&lt;/em&gt;?&lt;/h2&gt;
&lt;p&gt;In case you&#39;re wondering, my iOS time-planning app &lt;a href=&quot;https://joethephish.me/hour-by-hour&quot;&gt;Hour by Hour&lt;/a&gt; is also still in development!&lt;/p&gt;
</description>
      <pubDate>Tue, 18 Mar 2025 24:00:00 GMT</pubDate>
      <dc:creator>Joseph Humfrey</dc:creator>
      <guid>https://joethephish.me/blog/announcing-substage/</guid>
      <dateString>18th March 2025</dateString>
    </item>
    <item>
      <title>Hour by Hour has been named!</title>
      <link>https://joethephish.me/blog/hour-by-hour-named/</link>
      <description>&lt;p&gt;After much deliberation, I&#39;ve finally settled on a name for my time planning app: &lt;strong&gt;Hour by Hour&lt;/strong&gt;!&lt;/p&gt;
&lt;p&gt;I explored quite a few options along the way. &amp;quot;Inamo&amp;quot; (as in &amp;quot;in a mo&amp;quot;) had a nice ring to it. &amp;quot;Andiamo&amp;quot; felt energetic. &amp;quot;Thyme&amp;quot; was a cute play on words, and &amp;quot;Day by Selkie&amp;quot; would have tied nicely to future apps under the Selkie umbrella. But they all had issues - either they were already taken by other apps, or carried associations that didn&#39;t quite fit.&lt;/p&gt;
&lt;p&gt;In the end, I kept coming back to &amp;quot;Hour by Hour&amp;quot;. It&#39;s descriptive (which helps people find it on the App Store), but I think it&#39;s also elegant in its simplicity. Sometimes the straightforward choice is the right one!&lt;/p&gt;
&lt;p&gt;I&#39;m excited to be getting close to the first public release.&lt;/p&gt;
&lt;p&gt;I&#39;d love to hear what you think!&lt;/p&gt;
</description>
      <pubDate>Mon, 13 Jan 2025 24:00:00 GMT</pubDate>
      <dc:creator>Joseph Humfrey</dc:creator>
      <guid>https://joethephish.me/blog/hour-by-hour-named/</guid>
      <dateString>13th January 2025</dateString>
    </item>
    <item>
      <title>Playful visual design for indie apps</title>
      <link>https://joethephish.me/blog/indie-app-visual-design/</link>
      <description>&lt;div class=&quot;contextIntro&quot;&gt;I’m working on a &lt;a href=&quot;https://joethephish.me/&quot;&gt;time-planning app&lt;/a&gt; for iOS. It doesn’t have a name yet, but the idea is to let you sketch out a schedule hour by hour—like working backwards from a flight time to figure out when you need to wake up.&lt;/div&gt;
&lt;p&gt;A while ago, I was listening to the excellent &lt;a href=&quot;https://launchedfm.com/&quot;&gt;Launched podcast&lt;/a&gt; with &lt;a href=&quot;https://bsky.app/profile/charliemchapman.com&quot;&gt;Charlie Chapman&lt;/a&gt; (can’t remember the episode), and they were talking about how it’s almost a cliché for indie developers to design apps “as if Apple had made it.” That really hit me, because it&#39;s exactly where my head was at when I started.&lt;/p&gt;
&lt;p&gt;Here was my first mock-up:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://joethephish.me/blog/img/time-planner-mockup-in-frame@2x.png&quot; alt=&quot;First mockup of my app showing a minimal Apple-like design&quot; /&gt;&lt;/p&gt;
&lt;p&gt;It’s fine, but it felt… generic. The main reference I had in mind was the Apple Reminders app, with the core difference being that each item would have a time attached.&lt;/p&gt;
&lt;p&gt;The thing is, people who gravitate toward indie apps on iPhone often want something more exciting. They’re looking for something playful, something that stands out—something that brings a smile to their face.&lt;/p&gt;
&lt;p&gt;Take apps like &lt;a href=&quot;https://crouton.app/&quot;&gt;Crouton&lt;/a&gt;, &lt;a href=&quot;https://upaheadapp.com/&quot;&gt;Up Ahead&lt;/a&gt; or &lt;a href=&quot;https://www.meetcarrot.com/weather/&quot;&gt;CARROT Weather&lt;/a&gt;. They’re not just useful—they’re fun, they’re quirky, they’re charming. That’s a big part of their appeal. It doesn’t need to be in-your-face like the &lt;a href=&quot;https://www.notboring.software/&quot;&gt;Not Boring&lt;/a&gt; apps (although those are fantastic too!) It might just be a rainbow colour palette here, or an unusual widget there.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://joethephish.me/blog/img/3-indie-app-row.png&quot; alt=&quot;Screenshots of beautiful other apps&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Apple doesn’t need to do this to make high-quality apps, because they’re already the default. They need to make apps that just get out of the way and get the job done for the masses. Yes, they want their apps to be beautiful in a minimal, elegant sense, but they also need to be inoffensive and clutter-free. (This has been their brand for almost two decades: extreme polish, no rough edges, almost to a fault.)&lt;/p&gt;
&lt;p&gt;Good indie apps are literally &lt;em&gt;colourful&lt;/em&gt;. Instead of a cautious accent colour here and there, they can afford to lean a little more into bold, playful palettes.&lt;/p&gt;
&lt;p&gt;One other thing I’ve noticed is how many apps are using the rounded version of Apple’s San Francisco font. It’s become almost a badge of indie apps—a bit more gentle, quirky and informal than the default UI style:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://joethephish.me/blog/img/sf-rounded-shots@2x.png&quot; alt=&quot;Rounded San Francisco font&quot; /&gt;&lt;/p&gt;
&lt;div class=&quot;caption&quot;&gt;SF Rounded everywhere! Pictured are the excellent &lt;a href=&quot;https://www.yarnbuddy.app/&quot;&gt;Yarn Buddy&lt;/a&gt; and recent Apple iPhone App of the Year winner &lt;a href=&quot;https://tripsy.app/&quot;&gt;Tripsy&lt;/a&gt;.&lt;/div&gt;
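&lt;p&gt;If you’re building in SwiftUI, opting into the rounded design is just a font parameter away - for example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-Swift&quot;&gt;import SwiftUI

struct RoundedExample: View {
    var body: some View {
        Text(&amp;quot;9:00 Wake up&amp;quot;)
            .font(.system(.body, design: .rounded))   // per-view opt-in to SF Rounded
        // Or apply .fontDesign(.rounded) to a container (iOS 16.1+ / macOS 13+)
        // to give a whole view hierarchy the rounded design.
    }
}
&lt;/code&gt;&lt;/pre&gt;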
&lt;p&gt;Indie apps can afford to be, and absolutely should be, silly, passionate, edgy and surprising.&lt;/p&gt;
&lt;p&gt;But it’s a fine line. Go too far, and you risk slipping into clunky, overly custom designs—like a bad banking app, or that one annoying local public transport app you have to use. The problem isn’t just that it looks cheap; it’s that it actively gets in the way. Apple’s default UI works for a reason—it’s refined, polished, and it just gets the job done.&lt;/p&gt;
&lt;p&gt;That’s the challenge: how to make the app feel joyful and different, without messing up what makes it usable.&lt;/p&gt;
&lt;p&gt;So here are my latest mock-ups, though I&#39;m sure it&#39;s still got a few more iterations left before the first public release:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://joethephish.me/blog/img/latest-mockup@2x.jpg&quot; alt=&quot;Latest mockup of my app&quot; /&gt;&lt;/p&gt;
&lt;p&gt;It&#39;s still early days, but I&#39;m excited to keep experimenting.  I&#39;ll share more as it comes together.&lt;/p&gt;
</description>
      <pubDate>Tue, 03 Dec 2024 24:00:00 GMT</pubDate>
      <dc:creator>Joseph Humfrey</dc:creator>
      <guid>https://joethephish.me/blog/indie-app-visual-design/</guid>
      <dateString>3rd December 2024</dateString>
    </item>
    <item>
      <title>Exploring CloudKit and CKSyncEngine for my SwiftUI App</title>
      <link>https://joethephish.me/blog/core-data-vs-cloudkit/</link>
      <description>&lt;video class=&quot;video&quot; autoplay=&quot;&quot; loop=&quot;&quot; muted=&quot;&quot; playsinline=&quot;&quot;&gt;
    &lt;source src=&quot;https://assets.selkie.design/Demo-vid.mp4&quot; type=&quot;video/mp4&quot; /&gt;
&lt;/video&gt;
&lt;p&gt;If you&#39;re building a SwiftUI app in 2024, you&#39;ll face a key decision: how to handle data persistence and sharing. Modern apps need collaboration—users expect to share and sync their content seamlessly across devices and with others. But implementing robust sync in iOS isn&#39;t straightforward. Let me walk you through my journey with CloudKit, and why I landed on a surprising solution.&lt;/p&gt;
&lt;p&gt;I’m working on a time planning app in SwiftUI with a simple hierarchical structure: a &lt;strong&gt;Schedule&lt;/strong&gt; (top-level &amp;quot;document&amp;quot;) contains multiple &lt;strong&gt;Days&lt;/strong&gt;, and each &lt;strong&gt;Day&lt;/strong&gt; has multiple &lt;strong&gt;Events&lt;/strong&gt;. A key feature is iCloud syncing and sharing—users should be able to collaborate on Schedules seamlessly.&lt;/p&gt;
&lt;p&gt;Since my goal is to stick to Apple&#39;s ecosystem, I turned to CloudKit. A major advantage is that users don&#39;t need a separate login - they&#39;re already signed into iCloud on their devices, making it feel like a seamless, native Apple experience.&lt;/p&gt;
&lt;h2&gt;The CloudKit/Sharing Dilemma&lt;/h2&gt;
&lt;h3&gt;Option 1: SwiftData&lt;/h3&gt;
&lt;p&gt;For my brand new app, I&#39;m trying to use the latest Apple-recommended tech. SwiftData seeeems promising, but it &lt;a href=&quot;https://forums.developer.apple.com/forums/thread/756721&quot;&gt;doesn’t yet support sharing&lt;/a&gt;, which I think is a critical feature for my app. Plus, I’ve heard it still has growing pains, so I&#39;d be concerned about using it in production at this stage.&lt;/p&gt;
&lt;h3&gt;Option 2: Core Data + CloudKit&lt;/h3&gt;
&lt;p&gt;This is where I initially made the most progress, since it seemed like the most mature option. It&#39;s also the most popular one that I see mentioned online: &lt;code&gt;NSPersistentCloudKitContainer&lt;/code&gt; supports syncing and recently added sharing.&lt;/p&gt;
&lt;p&gt;As a newcomer to the Apple backend ecosystem, I think I assumed that CloudKit simply synced my Core Data store directly behind the scenes. Unfortunately they&#39;re two separate systems, and although Apple handles most of the translation for you, there are still a bunch of intricacies that can trip you up. So, you still need to understand both systems to use them effectively.&lt;/p&gt;
&lt;p&gt;You need to set up Core Data model files or manually create &lt;code&gt;NSManagedObject&lt;/code&gt; subclasses, then handle synchronization between your app&#39;s models and Core Data entities. This creates a dilemma - either use pure &lt;code&gt;NSManagedObject&lt;/code&gt; subclasses and lose access to SwiftUI&#39;s modern &lt;a href=&quot;https://developer.apple.com/documentation/swiftui/migrating-from-the-observable-object-protocol-to-the-observable-macro&quot;&gt;@Observable macro&lt;/a&gt;, or maintain separate model layers.&lt;/p&gt;
&lt;p&gt;So, when you create a Schedule in my app, it flows through multiple layers: Schedule (your &lt;code&gt;@Observable&lt;/code&gt; app model) gets mapped to ScheduleEntity (Core Data), which then gets translated to a CKRecord (CloudKit) behind the scenes.&lt;/p&gt;
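&lt;p&gt;Concretely, the separate-layers route means describing the same thing twice - something like this simplified illustration (not my actual code):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-Swift&quot;&gt;import CoreData
import Observation

// The app-facing model that SwiftUI views talk to…
@Observable final class Schedule {
    var title = &amp;quot;&amp;quot;
    var startDate = Date()
}

// …and the Core Data entity it has to be mirrored into by hand.
// NSPersistentCloudKitContainer then translates this into a CKRecord for you.
final class ScheduleEntity: NSManagedObject {
    @NSManaged var title: String
    @NSManaged var startDate: Date
}
&lt;/code&gt;&lt;/pre&gt;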
&lt;p&gt;I did get synchronisation working, but sharing via iCloud is where the frustrations began. I thought it would be relatively straightforward, since I already had all the data in iCloud - surely it would just be a matter of telling it to share a specific root object in a hierarchy, or a set of objects.&lt;/p&gt;
&lt;p&gt;Although it does seem to be possible to share Core Data objects via CloudKit, as explained in this &lt;a href=&quot;https://developer.apple.com/documentation/coredata/sharing_core_data_objects_between_icloud_users&quot;&gt;Apple documentation&lt;/a&gt;, it adds yet another layer of complexity.&lt;/p&gt;
&lt;h3&gt;Option 3: CloudKit only?&lt;/h3&gt;
&lt;p&gt;When I realised that CloudKit and Core Data were two completely different technologies that Apple has bridged, it made me start to wonder why I was bothering with Core Data at all. Local persistence in my app is pretty trivial: I don&#39;t expect users to accumulate a vast quantity of data. I can store each Schedule as a small JSON file.&lt;/p&gt;
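&lt;p&gt;With plain &lt;code&gt;Codable&lt;/code&gt; structs, that kind of persistence only takes a few lines - a simplified sketch of the idea:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-Swift&quot;&gt;import Foundation

struct Schedule: Codable {
    var id: UUID
    var title: String
    // …Days and Events would nest in here as Codable types too
}

func saveSchedule(_ schedule: Schedule, in directory: URL) throws {
    let url = directory.appendingPathComponent(&amp;quot;\(schedule.id).json&amp;quot;)
    try JSONEncoder().encode(schedule).write(to: url, options: .atomic)
}

func loadSchedule(from url: URL) throws -&amp;gt; Schedule {
    try JSONDecoder().decode(Schedule.self, from: Data(contentsOf: url))
}
&lt;/code&gt;&lt;/pre&gt;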
&lt;p&gt;However, the CloudKit API is very complex - &lt;code&gt;NSPersistentCloudKitContainer&lt;/code&gt; is doing a lot of heavy lifting for you.&lt;/p&gt;
&lt;h2&gt;Enter CKSyncEngine&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;https://joethephish.me/blog/img/core-data-and-cloudkit.jpg&quot; alt=&quot;Diagram of Core Data + CloudKit vs CloudKit-only tech stacks&quot; /&gt;&lt;/p&gt;
&lt;p&gt;After more digging, I discovered &lt;strong&gt;&lt;code&gt;CKSyncEngine&lt;/code&gt;&lt;/strong&gt;, a newer CloudKit framework from Apple, introduced in WWDC 2023. CKSyncEngine simplifies syncing by removing Core Data entirely—no translating &lt;code&gt;NSManagedObject&lt;/code&gt; into CloudKit records. That’s one less complex layer in the stack, which I love. Syncing is inherently tricky, so cutting out Core Data removes one huge source of potential bugs and confusion.&lt;/p&gt;
&lt;p&gt;The setup isn’t trivial, but the documentation feels less daunting. Apple’s &lt;a href=&quot;https://www.youtube.com/watch?v=BUFaXlNYokA&quot;&gt;WWDC talk&lt;/a&gt; and the &lt;a href=&quot;https://github.com/apple/sample-cloudkit-sync-engine&quot;&gt;sample project&lt;/a&gt; are decent starting points. &lt;a href=&quot;https://mastodon.social/@jordanmorgan&quot;&gt;Jordan Morgan&lt;/a&gt; from Superwall also published &lt;a href=&quot;https://superwall.com/blog/syncing-data-with-cloudkit-in-your-ios-app-using-cksyncengine-and-swift-and-swiftui&quot;&gt;this helpful guide&lt;/a&gt;.&lt;/p&gt;
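&lt;p&gt;To give a flavour of its shape, here’s a bare-bones sketch along those lines (the names are mine, and I’ve omitted state saving and error handling - the sample project above shows the full flow):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-Swift&quot;&gt;import CloudKit

final class SyncController: CKSyncEngineDelegate {
    lazy var syncEngine = CKSyncEngine(CKSyncEngine.Configuration(
        database: CKContainer.default().privateCloudDatabase,
        stateSerialization: nil,   // on later launches, pass the state you saved previously
        delegate: self
    ))

    // The engine reports everything here: fetched changes, sent changes,
    // account changes, state updates… apply them to your local models.
    func handleEvent(_ event: CKSyncEngine.Event, syncEngine: CKSyncEngine) async {
    }

    // Called when the engine is ready to upload; return your pending local changes.
    func nextRecordZoneChangeBatch(
        _ context: CKSyncEngine.SendChangesContext,
        syncEngine: CKSyncEngine
    ) async -&amp;gt; CKSyncEngine.RecordZoneChangeBatch? {
        nil
    }
}
&lt;/code&gt;&lt;/pre&gt;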
&lt;p&gt;Syncing is complex, but &lt;code&gt;CKSyncEngine&lt;/code&gt;’s direct CloudKit integration makes it more manageable. No dual-layered technologies fighting for control. I hated having to learn Core Data &lt;em&gt;and&lt;/em&gt; CloudKit when all I wanted was iCloud sharing to &amp;quot;just work.&amp;quot; &lt;code&gt;CKSyncEngine&lt;/code&gt; strips this down to one API. It&#39;s still not as simple as the idealised SwiftData approach, but it&#39;s the best option I&#39;ve found so far. There’s surprisingly little written about this approach. Hopefully, &lt;code&gt;CKSyncEngine&lt;/code&gt; gains traction because it feels like a more modern and efficient way to handle iCloud syncing.&lt;/p&gt;
&lt;h2&gt;The Future&lt;/h2&gt;
&lt;p&gt;I can’t help but feel a little outraged that Apple hasn’t made iCloud sharing easier. The “ideal” Apple app, with CloudKit and collaboration, should be straightforward to build—yet it&#39;s not. Maybe SwiftData will eventually support sharing and mature into the obvious choice for new apps. Until then, &lt;code&gt;CKSyncEngine&lt;/code&gt; seems like the cleanest path forward.&lt;/p&gt;
</description>
      <pubDate>Sat, 16 Nov 2024 24:00:00 GMT</pubDate>
      <dc:creator>Joseph Humfrey</dc:creator>
      <guid>https://joethephish.me/blog/core-data-vs-cloudkit/</guid>
      <dateString>16th November 2024</dateString>
    </item>
  </channel>
</rss>

