<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Chloe Zhou]]></title><description><![CDATA[A self-taught frontend developer writing about real debugging, career transition, and what it actually looks like to grow in tech without a CS degree.]]></description><link>https://chloezhou.dev</link><generator>RSS for Node</generator><lastBuildDate>Wed, 08 Apr 2026 10:34:52 GMT</lastBuildDate><atom:link href="https://chloezhou.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[My Etsy Experiment Record (August–October 2025)]]></title><description><![CDATA[The starting point and motivation
Around mid-2025, I was looking for ways to increase my income and had long wanted to try a side project that could both train me and bring in some extra money.
Earlie]]></description><link>https://chloezhou.dev/my-etsy-experiment-record-august-october-2025</link><guid isPermaLink="true">https://chloezhou.dev/my-etsy-experiment-record-august-october-2025</guid><category><![CDATA[Entrepreneurship]]></category><category><![CDATA[indie-hacker]]></category><category><![CDATA[etsy]]></category><category><![CDATA[side project]]></category><category><![CDATA[Career]]></category><dc:creator><![CDATA[Chloe Zhou]]></dc:creator><pubDate>Sat, 04 Apr 2026 08:02:20 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69bdf388475ca1797455d60f/70dc2808-74e6-4605-a09c-8fbd6156c482.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>The starting point and motivation</h3>
<p>Around mid-2025, I was looking for ways to increase my income and had long wanted to try a side project that could both train me and bring in some extra money.</p>
<p>Earlier that year, I started following a creator on Xiaohongshu (a Chinese social media platform) who shared videos about running an Etsy shop selling digital products and templates. He talked about using AI to create virtual products, and his content made me think: "I could do that too." Digital products don't require inventory, rely mostly on creativity, and have very low upfront costs. I already used AI tools frequently, so I felt confident I could make something work.</p>
<p>These two motivations overlapped — financial pressure and outside inspiration — and together they pushed me to officially start my Etsy experiment in mid-2025.</p>
<p>Before committing, I did some market research: I studied best sellers and star sellers on Etsy, looking at review counts, prices, product images, and descriptions. I tried to estimate their sales by comparing review counts to total sales (reviews typically account for less than 10% of sales). My goal was to determine whether this niche could realistically meet my income target.</p>
<p>I also defined my target users: people who needed financial awareness tools — freelancers, self-employed individuals, or anyone wanting to track their personal budgets.</p>
<h3>First action: choosing the "budgeting printable template" niche</h3>
<p>I decided to start with printable, downloadable templates — monthly budgets, expense trackers, and similar tools. My main advantage was being able to code. I didn't know design tools like Canva, so my first products were built entirely through code.</p>
<p>My first product was actually a bundle: a <strong>Monthly Budget Planner</strong>, an <strong>Expense Tracker</strong>, a <strong>Bill Planner</strong>, a <strong>Debt Tracker</strong>, and a <strong>Savings Goals</strong> template — designed to work as a connected system that helps users understand their spending patterns.</p>
<p>The <strong>Emergency Fund Booster</strong> was designed to make saving purposeful. People often struggle to save when they don't know <em>why</em> they're saving. I wanted to give their savings meaning — preparing for layoffs, unexpected medical bills, or other emergencies.</p>
<h3>Technical path, tool shift, and the Google Sheets product line</h3>
<p>Initially, I relied entirely on coding to build my templates. But I quickly realized I was spending far more time maintaining code than improving designs or expanding my product line.</p>
<p>So I tried designing a product in Canva — and found it was much easier and visually cleaner. The colors, fonts, and layouts looked more appealing. I also created multiple color schemes for users to choose from.</p>
<p>After shifting my mindset toward design rather than pure logic, I explored a niche that better reflected my technical strengths: <strong>Google Sheets templates</strong>. Google Sheets let me combine coding (via Apps Script) with small automation features, making templates more practical and user-friendly.</p>
<p>It took about two weeks to build my first Google Sheets product from concept to final version. During that process, I also learned video editing — static images couldn't effectively showcase how the Sheets worked. I started recording my screen, editing clips, and turning them into marketing videos. That was a personal breakthrough: combining coding, product thinking, and content creation in one project.</p>
<h3>Product launch, SEO, and early data strategy</h3>
<p>My upload process was: use <strong>eRank</strong> to find high-volume, low-competition keywords, feed those into ChatGPT to generate titles and descriptions, and design product images in Canva.</p>
<p>Following advice from the creator I'd been watching, I didn't focus heavily on analytics early on. Etsy's algorithm doesn't give new shops with few listings much visibility, so I focused on building up my product count first.</p>
<p>What I didn't yet understand was marketing logic: what kind of image or description actually drives conversions, or how to effectively communicate a product's value in a listing. Those gaps only became clear later.</p>
<h3>Expanding listings and taking data seriously</h3>
<p>Once my total listings reached around ten, I started checking analytics seriously — and the results were discouraging.</p>
<p>Unlike the printable products, which got at least a few daily impressions, the Google Sheets products had almost none. Zero visibility. The algorithm wasn't distributing them at all.</p>
<p>Seeing that data made me doubt everything: Was the product flawed? Was my positioning wrong? Was my whole approach misguided?</p>
<h3>Adjustments and outcomes</h3>
<p>Faced with nearly zero organic traffic, I first overhauled all my Google Sheets titles, tags, and descriptions with AI's help, then waited about a week. Nothing changed.</p>
<p>Next, I shifted from broad "high search, low competition" keywords to more specific ones that captured my product's unique value. I also set up a <strong>30% off limited-time discount</strong>, hoping to attract my first buyers and reviews — which might trigger the algorithm to show my listings more.</p>
<p>I noticed a small amount of traffic coming from <strong>Pinterest</strong>, so I created an account and started posting pins for my Google Sheets products. The pins got a few impressions but almost no clicks. Overall Etsy traffic stayed extremely low.</p>
<p>Each time I made adjustments, I'd wait a week and check the data — but there was never any meaningful improvement.</p>
<h3>Mindset and the decision to pause</h3>
<p>During this period, I was working a full-time job by day and doing Etsy at night. I invested a lot of energy, but the lack of progress — combined with directionless iteration and constant waiting for algorithmic changes — became exhausting.</p>
<p>I couldn't pinpoint the real problem: Was it the product? The listings? The exposure strategy? My understanding of the target user? That uncertainty pushed me toward burnout.</p>
<p>Eventually, I decided to pause — not to quit, but to give myself space to reflect and recover before moving forward.</p>
<h3>You either succeed or you learn</h3>
<p>Looking back, I've realized that action is everything. You can read, plan, and analyze forever — but nothing changes until you actually do something.</p>
<p>There's really no such thing as failure if you choose to see it as part of your growth. You either succeed, or you learn. This experiment turned out to be one of the most valuable things I did that year — which is why I wanted to write it down, partly for my future self, and partly for anyone who might stumble across it.</p>
<p>I'm reminded of something Charlie Munger said: if you want to avoid mistakes, you first have to know what mistakes look like. If you're thinking about starting an Etsy store, maybe this story can show you a few things <em>not</em> to do.</p>
<p>I still don't know exactly why mine didn't work out. But at least I tried — and that's the point. Because if you never take action, you'll never know what could have happened.</p>
]]></content:encoded></item><item><title><![CDATA[JavaScript Async Programming: From Callbacks to Async/Await]]></title><description><![CDATA[Originally written in 2023. Content may vary slightly across newer versions.

JavaScript's async story didn't start with async/await — it evolved through several patterns, each solving problems the pr]]></description><link>https://chloezhou.dev/javascript-async-programming-from-callbacks-to-async-await</link><guid isPermaLink="true">https://chloezhou.dev/javascript-async-programming-from-callbacks-to-async-await</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[asynchronous JavaScript]]></category><category><![CDATA[webdev]]></category><category><![CDATA[Promises in JavaScript]]></category><category><![CDATA[Frontend Development]]></category><dc:creator><![CDATA[Chloe Zhou]]></dc:creator><pubDate>Sat, 04 Apr 2026 07:38:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69bdf388475ca1797455d60f/c0f8b3a8-8806-4c34-a0ec-6eb2ab027cf8.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p>Originally written in 2023. Content may vary slightly across newer versions.</p>
</blockquote>
<p>JavaScript's async story didn't start with <code>async/await</code> — it evolved through several patterns, each solving problems the previous one couldn't. This article traces that evolution from callbacks all the way to async functions.</p>
<hr />
<h4>Synchronous vs. asynchronous</h4>
<p>JavaScript is single-threaded — code executes line by line within the call stack. Until the previous task completes, the next one waits. This is simple to reason about, but if one task runs indefinitely, everything else is blocked and the browser appears frozen.</p>
<p>JavaScript's asynchronous paradigm addresses this. Instead of running in one go, an async task is split into stages with callbacks. HTTP requests are a classic example: once a request is sent, the response won't come back immediately, so a callback handles the result when it does.</p>
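<p>A minimal sketch of that split, using <code>setTimeout</code> to stand in for a slow task such as an HTTP request:</p>
<pre><code class="language-javascript">console.log('request sent')

// The callback is queued; the engine does not wait here.
setTimeout(function () {
  console.log('response handled')
}, 0)

console.log('next task runs immediately')

// Output order:
// request sent
// next task runs immediately
// response handled
</code></pre>
<p>Even with a delay of <code>0</code>, the callback only runs after the current synchronous code finishes.</p>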
<hr />
<h4>Callbacks</h4>
<p>The oldest solution. Given two functions <code>f1</code> and <code>f2</code> where <code>f2</code> depends on <code>f1</code>'s result:</p>
<pre><code class="language-javascript">f1()
f2()
</code></pre>
<p>If <code>f1</code> takes a long time, refactor <code>f2</code> as <code>f1</code>'s callback:</p>
<pre><code class="language-javascript">function f1(callback) {
  setTimeout(function () {
    // f1's code
    callback()
  }, 2000)
}

f1(f2)
</code></pre>
<blockquote>
<p><strong>Pro:</strong> Simple to understand and implement.
<strong>Con:</strong> As logic grows, callbacks become deeply nested, hard to read, and tightly coupled.</p>
</blockquote>
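<p>The coupling problem shows up as soon as steps depend on each other. A hypothetical three-step chain (the <code>step*</code> helpers are invented for illustration) already nests three levels deep:</p>
<pre><code class="language-javascript">// Each hypothetical step reports its result via a callback.
function step1(cb) { setTimeout(function () { cb(1) }, 10) }
function step2(a, cb) { setTimeout(function () { cb(a + 1) }, 10) }
function step3(b, cb) { setTimeout(function () { cb(b + 1) }, 10) }

// The "pyramid of doom": each result is only visible inside its callback.
step1(function (a) {
  step2(a, function (b) {
    step3(b, function (c) {
      console.log('done:', c) // done: 3
    })
  })
})
</code></pre>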
<hr />
<h4>Event-driven programming</h4>
<p>Tasks execute based on whether a specific event fires. (Here <code>f1</code> is assumed to be an event-capable object, such as a jQuery-wrapped one, that provides <code>on</code> and <code>trigger</code>; plain functions have no such methods.)</p>
<pre><code class="language-javascript">f1.on('done', f2)

function f1() {
  setTimeout(function () {
    // f1's code
    f1.trigger('done')
  }, 2000)
}
</code></pre>
<blockquote>
<p><strong>Pro:</strong> Easy to decouple; works well with modularisation.
<strong>Con:</strong> The overall logic flow becomes harder to follow as the application grows.</p>
</blockquote>
<hr />
<h4>Publish/Subscribe</h4>
<p>Treat events as signals. A signal centre publishes when a task completes; other tasks subscribe to act on it — also known as the observer pattern.</p>
<p>Using <a href="https://gist.github.com/cowboy/661855">Ben Alman's Tiny Pub/Sub</a>:</p>
<pre><code class="language-javascript">jQuery.subscribe('done', f2)

function f1() {
  setTimeout(function () {
    // f1's code
    jQuery.publish('done')
  }, 2000)
}

jQuery.unsubscribe('done', f2)
</code></pre>
<blockquote>
<p><strong>Pro:</strong> Similar to event-driven, but the signal centre gives a clearer picture of what's happening across the application.</p>
</blockquote>
<hr />
<h4>Promises</h4>
<p>First proposed by the CommonJS community and later standardised in ES6. Every async task returns a Promise with a <code>then</code> method:</p>
<pre><code class="language-javascript">f1().then(f2).then(f3)
</code></pre>
<p>Rewrite <code>f1</code> with a deferred:</p>
<pre><code class="language-javascript">function f1() {
  var dfd = $.Deferred()
  setTimeout(function () {
    // f1's code
    dfd.resolve()
  }, 2000)
  return dfd.promise()
}
</code></pre>
<p>Handle errors:</p>
<pre><code class="language-javascript">f1().then(f2).fail(f3)
</code></pre>
<blockquote>
<p><strong>Pro:</strong> Callbacks are chained rather than nested. If a callback is attached after a task completes, it fires immediately — no risk of missing an event.</p>
</blockquote>
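<p>For comparison, here is the same <code>f1</code> written with native Promises (standardised in ES6), with no jQuery dependency; note that the native equivalent of <code>.fail()</code> is <code>.catch()</code>:</p>
<pre><code class="language-javascript">function f1() {
  return new Promise(function (resolve) {
    setTimeout(function () {
      // f1's code
      resolve()
    }, 2000)
  })
}

function f2() { console.log('f2 runs after f1') }

f1()
  .then(f2)
  .catch(function (err) { console.error(err) })
</code></pre>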
<hr />
<h4>Generators</h4>
<p>Introduced in ES6, generators can pause and resume execution using <code>yield</code>. They make async code read almost like synchronous code:</p>
<pre><code class="language-javascript">function* gen() {
  var url = 'https://api.github.com/users/github'
  var result = yield fetch(url)
  console.log(result.bio)
}
</code></pre>
<p>Executing it manually:</p>
<pre><code class="language-javascript">var g = gen()
var result = g.next()

result.value
  .then(function(data) { return data.json() })
  .then(function(data) { g.next(data) })
</code></pre>
<p>Generators also support two-way data flow and external error handling via <code>.throw()</code>:</p>
<pre><code class="language-javascript">function* gen(x) {
  try {
    var y = yield x + 2
  } catch (e) {
    console.log(e)
  }
  return y
}

var g = gen(1)
g.next()
g.throw('error!') // error!
</code></pre>
<blockquote>
<p><strong>Con:</strong> The two-phase execution pattern can be confusing, and generators require an external executor (like the <code>co</code> library) to run automatically.</p>
</blockquote>
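<p>A toy version of such an executor (a heavily simplified sketch of what <code>co</code> does, assuming every yielded value is a Promise or a plain value) fits in a few lines:</p>
<pre><code class="language-javascript">// Drives a generator to completion, resuming it with each resolved value.
function run(genFn) {
  const g = genFn()
  return new Promise(function (resolve, reject) {
    function step(advance) {
      let result
      try {
        result = advance() // resume the generator
      } catch (err) {
        return reject(err) // an error escaped the generator body
      }
      if (result.done) return resolve(result.value)
      Promise.resolve(result.value).then(
        function (value) { step(function () { return g.next(value) }) },
        function (err) { step(function () { return g.throw(err) }) }
      )
    }
    step(function () { return g.next() })
  })
}

// The generator now reads like synchronous code.
run(function* () {
  const a = yield Promise.resolve(1)
  const b = yield Promise.resolve(a + 1)
  return a + b
}).then(function (sum) { console.log(sum) }) // 3
</code></pre>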
<hr />
<h4>Async/Await</h4>
<p>Standardised in ES2017 (often loosely called ES7), async functions are syntactic sugar for generators — and widely regarded as the most ergonomic solution to async programming.</p>
<pre><code class="language-javascript">var asyncReadFile = async function () {
  var f1 = await readFile('/etc/fstab')
  var f2 = await readFile('/etc/shells')
  console.log(f1.toString())
  console.log(f2.toString())
}
</code></pre>
<p>Three improvements over generators:</p>
<ol>
<li><strong>Built-in executor</strong> — no <code>co</code> library needed; async functions run like regular functions.</li>
<li><strong>Better semantics</strong> — <code>async</code> and <code>await</code> are self-explanatory compared to <code>*</code> and <code>yield</code>.</li>
<li><strong>Better adaptability</strong> — <code>await</code> accepts both Promises and primitive values.</li>
</ol>
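<p>Point 3 is easy to demonstrate: <code>await</code>-ing a plain value simply passes it through, and the async function wraps whatever it returns in a Promise:</p>
<pre><code class="language-javascript">async function f() {
  const n = await 42 // not a Promise; resolves immediately to 42
  return n + 1
}

f().then(function (result) { console.log(result) }) // 43
</code></pre>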
<p>Async functions always return a Promise. Use <code>try...catch</code> for error handling:</p>
<pre><code class="language-javascript">async function main() {
  try {
    const val1 = await firstStep()
    const val2 = await secondStep(val1)
    const val3 = await thirdStep(val1, val2)
    console.log('Final: ', val3)
  } catch (err) {
    console.log(err)
  }
}
</code></pre>
<p>Run independent operations in parallel with <code>Promise.all</code>:</p>
<pre><code class="language-javascript">// slow — sequential
let foo = await getFoo()
let bar = await getBar()

// fast — parallel
let [foo, bar] = await Promise.all([getFoo(), getBar()])
</code></pre>
<hr />
<h4>Summary</h4>
<table>
<thead>
<tr>
<th>Pattern</th>
<th>Readability</th>
<th>Error handling</th>
<th>Coupling</th>
</tr>
</thead>
<tbody><tr>
<td>Callbacks</td>
<td>Low</td>
<td>Manual</td>
<td>High</td>
</tr>
<tr>
<td>Event-driven</td>
<td>Medium</td>
<td>Manual</td>
<td>Medium</td>
</tr>
<tr>
<td>Pub/Sub</td>
<td>Medium</td>
<td>Manual</td>
<td>Low</td>
</tr>
<tr>
<td>Promises</td>
<td>High</td>
<td><code>.catch()</code></td>
<td>Low</td>
</tr>
<tr>
<td>Generators</td>
<td>High</td>
<td><code>try/catch</code></td>
<td>Low</td>
</tr>
<tr>
<td>Async/Await</td>
<td>Highest</td>
<td><code>try/catch</code></td>
<td>Low</td>
</tr>
</tbody></table>
<p><em>References: <a href="http://www.ruanyifeng.com/blog/2012/12/asynchronous%EF%BC%BFjavascript.html">Async JavaScript</a> · <a href="http://www.ruanyifeng.com/blog/2015/04/generator.html">Generator</a> · <a href="http://www.ruanyifeng.com/blog/2015/05/async.html">Async</a></em></p>
]]></content:encoded></item><item><title><![CDATA[Merging vs. Rebasing: Git’s Showdown of Independence and Adaptability]]></title><description><![CDATA[Originally written in 2024. Content may vary slightly across newer versions.

Introduction
In the world of Git, two techniques often take center stage: Merging and Rebasing. While both aim to combine ]]></description><link>https://chloezhou.dev/merging-vs-rebasing-git-s-showdown-of-independence-and-adaptability</link><guid isPermaLink="true">https://chloezhou.dev/merging-vs-rebasing-git-s-showdown-of-independence-and-adaptability</guid><category><![CDATA[Git]]></category><category><![CDATA[version control]]></category><category><![CDATA[merge-vs-rebase]]></category><category><![CDATA[General Programming]]></category><category><![CDATA[Gitcommands]]></category><dc:creator><![CDATA[Chloe Zhou]]></dc:creator><pubDate>Sat, 04 Apr 2026 07:08:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69bdf388475ca1797455d60f/3915c633-023b-495e-adb5-030b92a31f21.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p>Originally written in 2024. Content may vary slightly across newer versions.</p>
</blockquote>
<h4>Introduction</h4>
<p>In the world of Git, two techniques often take center stage: <strong>Merging</strong> and <strong>Rebasing</strong>. While both aim to combine changes from different branches, their methods and implications are vastly different. Understanding these approaches is crucial for managing your project’s history and navigating collaborative workflows effectively.</p>
<h4>What is Merging in Git?</h4>
<p>Merging is like having a big team of people (branches) with their own ideas, and when they all come together, they contribute their changes. The cool thing about merging is that <strong>everyone’s history stays intact</strong>. It’s like a potluck dinner where everyone brings their dish, and the new dish (the merge commit) gets added on top.</p>
<p><strong>Example of Merging</strong></p>
<p>Let’s say you have the following Git history:</p>
<pre><code class="language-shell">A---B---C  (main)
 \
  D---E  (feature-branch)
</code></pre>
<p>When you merge <code>feature-branch</code> into <code>main</code>, Git creates a new <strong>merge commit (M)</strong>. This happens because the branches have <strong>diverged</strong>, meaning they each have unique changes that need to be combined.</p>
<p>Git resolves this by finding the <strong>best common ancestor (BCA)</strong> — the most recent shared commit. In this case, <strong>A</strong> is the BCA, where both <code>main</code> and <code>feature-branch</code> started. Git then takes the changes from <strong>A → C</strong> (on main) and <strong>A → E</strong> (on feature-branch), and merges them into <strong>M</strong>.</p>
<p>The resulting history will look like this:</p>
<pre><code class="language-shell">A---B---C---M  (main)
 \         /
  D-----E  (feature-branch)
</code></pre>
<p>Notice how the merge commit ties everything together without altering the history of either branch. <strong>Now, the merge commit (M) has two parent commits</strong>, reflecting the branches that were merged.</p>
<h4>When is a Merge Commit Not Needed? Fast-Forward Merges</h4>
<p>If the branches haven’t diverged, Git doesn’t need a merge commit because it can simply “fast-forward” the <code>main</code> branch to include the commits from <code>feature-branch</code>.</p>
<p>In this case, the BCA will be the <strong>tip of the</strong> <code>main</code> <strong>branch</strong>.</p>
<p>Here’s the example:</p>
<pre><code class="language-shell">A---B---C  (main)
         \
          D---E  (feature-branch)
</code></pre>
<p>In this scenario, <strong>C</strong> is the <strong>BCA</strong> because it’s the last commit shared by both branches. Since main has no new commits after <strong>C</strong>, Git can just “fast-forward” the main branch to include the changes from feature-branch (commits <strong>D</strong> and <strong>E</strong>).</p>
<p>The resulting history will look like this:</p>
<pre><code class="language-shell">A---B---C---D---E  (main, feature-branch)
</code></pre>
<p>It’s a clean and direct update.</p>
<p><strong>Visualizing the Process</strong></p>
<p>Want to see this all come together in a neat, visual way? Use the command:</p>
<pre><code class="language-shell">git log --oneline --graph
</code></pre>
<p>Here’s the thing: <code>--graph</code> is a bit of an unsung hero in Git. This command provides a clear graph of your commit history, making it easy to spot merge commits, diverged branches, and how they come together. It’s simple and incredibly useful — give it a try!</p>
<h4>What is Rebasing in Git?</h4>
<p>Rebasing rewrites history, but what does that really mean?</p>
<p>It’s like taking a time machine to rearrange the sequence of events. Instead of having a merge commit that combines two branches, rebasing makes it appear as though your branch’s changes were always part of the target branch, created right after its latest commit.</p>
<p>When you rebase, Git <strong>moves your branch’s commits to the top of the target branch</strong>, giving you a straight, linear history. But this comes at a cost: the old history of your branch is replaced with a rewritten version.</p>
<p><strong>How rebasing works: a three-step process</strong></p>
<ol>
<li><strong>Start the rebase:</strong> Run the command:</li>
</ol>
<pre><code class="language-bash">git rebase &lt;targetbranch&gt;
</code></pre>
<p>For example, if your feature branch is currently checked out, running <code>git rebase main</code> will prepare Git to rebase <code>feature-branch</code> onto <code>main</code>.</p>
<ol start="2">
<li><strong>Reapply commits one by one:</strong> Git checks out the latest commit on <code>&lt;targetbranch&gt;</code> (commit C on <code>main</code>), then takes each commit from <code>&lt;currentbranch&gt;</code> (D and E) and replays them on top of C, creating new commits <strong>D'</strong> and <strong>E'</strong>.</li>
</ol>
<p>The original commits D and E are gone — their history has been replaced with rewritten versions. This is what "rewriting history" means: Git creates new versions of your commits that appear as if they were made after commit C.</p>
<ol start="3">
<li><strong>Update the branch:</strong> Once all commits have been reapplied, Git updates <code>&lt;currentbranch&gt;</code> to point to the new commits:</li>
</ol>
<pre><code>A---B---C  (main)
         \
          D'---E'  (feature-branch)
</code></pre>
<p><code>feature-branch</code> now has a clean, linear history that follows directly after <code>main</code>.</p>
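<p>The whole flow can be reproduced in a throwaway repository (the commands below assume Git 2.28 or newer for <code>git init -b</code>; the file names are invented for illustration):</p>
<pre><code class="language-bash"># Build the diverged history: A shared, B--C on main, D--E on feature-branch.
git init -b main demo &amp;&amp; cd demo
echo a &gt; a.txt &amp;&amp; git add a.txt &amp;&amp; git commit -m "A"
git checkout -b feature-branch
echo d &gt; d.txt &amp;&amp; git add d.txt &amp;&amp; git commit -m "D"
echo e &gt; e.txt &amp;&amp; git add e.txt &amp;&amp; git commit -m "E"
git checkout main
echo b &gt; b.txt &amp;&amp; git add b.txt &amp;&amp; git commit -m "B"
echo c &gt; c.txt &amp;&amp; git add c.txt &amp;&amp; git commit -m "C"

# Replay D and E on top of C as D' and E'.
git checkout feature-branch
git rebase main

git log --oneline --graph   # a straight line: A-B-C-D'-E'
</code></pre>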
<blockquote>
<p><strong>Golden rule:</strong> Only rebase private branches. For shared branches like <code>main</code>, always use merging to avoid rewriting history that others rely on.</p>
</blockquote>
<h4>Merging vs. rebasing: what's the difference?</h4>
<p>Both merging and rebasing integrate changes from different branches, but in very different ways:</p>
<ol>
<li><p><strong>History:</strong> Merging keeps the history of both branches intact and creates a new commit that ties them together — like a time capsule. Rebasing rewrites history by replaying your commits on top of the target branch, making the timeline cleaner but changing how things appear to have happened.</p>
</li>
<li><p><strong>Commit creation:</strong> Merging creates one new merge commit that combines everything. Rebasing creates multiple new commits — one for each of your original commits — each reapplied onto the target branch.</p>
</li>
<li><p><strong>History shape:</strong> Merging produces a non-linear history with merge commits accumulating over time. Rebasing produces a straight, linear history with no merge commits.</p>
</li>
</ol>
<h4>When to use merging or rebasing?</h4>
<p><strong>Use merging if:</strong></p>
<ul>
<li>You're working in a team and want a record of when branches were combined.</li>
<li>You want a complete history that shows all merges.</li>
<li>You want to avoid rewriting anything.</li>
</ul>
<p><strong>Use rebasing if:</strong></p>
<ul>
<li>You prefer a clean, linear history with no merge commits.</li>
<li>You're working solo or on a private feature branch.</li>
<li>You're comfortable rewriting history on your own branch — but never on a branch others are already working with.</li>
</ul>
<h4>Merging vs. Rebasing: Independence vs. Adaptability</h4>
<p>Here’s where things get interesting. I started thinking about these Git concepts from a personal perspective. Merging and rebasing aren’t just technical terms; they kind of reflect how we approach life, too.</p>
<p><strong>Merging and Independence</strong></p>
<p>Merging is like <strong>standing your ground</strong>. You’re your own person with your own history, and you’re just adding someone else’s history into your own without changing your past. It’s all about <strong>maintaining your independence</strong> while still being open to collaboration. You might take on someone else’s ideas, but you don’t change who you are.</p>
<p><strong>Rebasing and Adaptability</strong></p>
<p>Rebasing, on the other hand, is like being <strong>flexible</strong> and willing to <strong>adapt</strong>. It’s like saying, “Okay, I see what you did, and I’ll place my story in your timeline so that it makes sense.” It’s rewriting your own history to fit better with someone else’s. Rebasing is less independent because you’re conforming your story to someone else’s, but sometimes, that’s what we need to do in life to get things to work smoothly.</p>
<p><strong>Conclusion</strong></p>
<p>Merging and rebasing are both vital tools in your Git toolbox, but which one you use depends on the situation and your desired outcome. Merging is like keeping your own story intact while incorporating others, while rebasing is like rewriting your story to fit into another’s timeline. Both have their place in the world of version control (and life), so knowing when to use each will help you navigate your projects — and your personal development — with a bit more ease.</p>
]]></content:encoded></item><item><title><![CDATA[Simplified Guide: Integrating Icons with react-native-vector-icons in React Native (2024)]]></title><description><![CDATA[The official documentation from react-native-vector-icons is your go-to guide, but things don't always match up exactly as expected — especially on bare React Native projects. This guide is here to fi]]></description><link>https://chloezhou.dev/simplified-guide-integrating-icons-with-react-native-vector-icons-in-react-native-2024</link><guid isPermaLink="true">https://chloezhou.dev/simplified-guide-integrating-icons-with-react-native-vector-icons-in-react-native-2024</guid><category><![CDATA[React Native]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[Mobile Development]]></category><category><![CDATA[iOS]]></category><category><![CDATA[Android]]></category><dc:creator><![CDATA[Chloe Zhou]]></dc:creator><pubDate>Sat, 04 Apr 2026 03:56:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69bdf388475ca1797455d60f/ba62962f-ce3a-4dab-a854-9d87e6691cac.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The official <a href="https://github.com/oblador/react-native-vector-icons">documentation</a> from <code>react-native-vector-icons</code> is your go-to guide, but things don't always match up exactly as expected — especially on bare React Native projects. This guide is here to fill the gaps.</p>
<h4>Development environment</h4>
<ul>
<li>Xcode: 14.3.1</li>
<li>macOS: Ventura 13.0.1</li>
<li>React: 18.2.0</li>
<li>React Native: 0.73.6</li>
<li><code>react-native-vector-icons</code>: 10.0.3</li>
</ul>
<blockquote>
<p>In this guide, italic text is quoted directly from the official documentation, and <strong>bold text</strong> is my own commentary.</p>
</blockquote>
<h4>Installation</h4>
<pre><code class="language-bash">npm install --save react-native-vector-icons
</code></pre>
<h4>iOS setup</h4>
<p><strong>Open your iOS project in Xcode</strong> by navigating to your project directory and opening the <code>.xcworkspace</code> file.</p>
<p><strong>Copy fonts to your Xcode project:</strong></p>
<p><em>Navigate to <code>node_modules/react-native-vector-icons</code> and drag the <code>Fonts</code> folder (or specific fonts) into your Xcode project. Make sure your app is checked under "Add to targets," and if adding the whole folder, check "Create groups."</em></p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*1SxFnhnpYK4IFnFbpjUeVw.png" alt="" /></p>
<p><strong>Your folder structure should look like this:</strong></p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*xGw-uxxf1NXGS5SrDabG2Q.png" alt="" /></p>
<p><strong>Edit <code>Info.plist</code></strong> and add a property called <code>Fonts provided by application</code> (or <code>UIAppFonts</code> if Xcode autocomplete isn't working):</p>
<p><em>Full list of available fonts to copy into <code>Info.plist</code>:</em></p>
<pre><code class="language-xml">&lt;key&gt;UIAppFonts&lt;/key&gt;
&lt;array&gt;
  &lt;string&gt;AntDesign.ttf&lt;/string&gt;
  &lt;string&gt;Entypo.ttf&lt;/string&gt;
  &lt;string&gt;EvilIcons.ttf&lt;/string&gt;
  &lt;string&gt;Feather.ttf&lt;/string&gt;
  &lt;string&gt;FontAwesome.ttf&lt;/string&gt;
  &lt;string&gt;FontAwesome5_Brands.ttf&lt;/string&gt;
  &lt;string&gt;FontAwesome5_Regular.ttf&lt;/string&gt;
  &lt;string&gt;FontAwesome5_Solid.ttf&lt;/string&gt;
  &lt;string&gt;FontAwesome6_Brands.ttf&lt;/string&gt;
  &lt;string&gt;FontAwesome6_Regular.ttf&lt;/string&gt;
  &lt;string&gt;FontAwesome6_Solid.ttf&lt;/string&gt;
  &lt;string&gt;Foundation.ttf&lt;/string&gt;
  &lt;string&gt;Ionicons.ttf&lt;/string&gt;
  &lt;string&gt;MaterialIcons.ttf&lt;/string&gt;
  &lt;string&gt;MaterialCommunityIcons.ttf&lt;/string&gt;
  &lt;string&gt;SimpleLineIcons.ttf&lt;/string&gt;
  &lt;string&gt;Octicons.ttf&lt;/string&gt;
  &lt;string&gt;Zocial.ttf&lt;/string&gt;
  &lt;string&gt;Fontisto.ttf&lt;/string&gt;
&lt;/array&gt;
</code></pre>
<p><strong>Check Build Phases:</strong></p>
<p><em>In Xcode, select your project in the navigator, choose your app's target, go to the <code>Build Phases</code> tab, and under <code>Copy Bundle Resources</code>, add the copied fonts.</em></p>
<p><strong>In some cases the fonts may already be listed under "Copy Bundle Resources" — if so, no action needed.</strong></p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*xJD4eAksmsBhrh_BZrOAYg.png" alt="" /></p>
<p><strong>Create <code>react-native.config.js</code>:</strong></p>
<p><em>When using auto-linking, all fonts are automatically added to <code>Build Phases &gt; Copy Pods Resources</code>, which bloats your bundle. To prevent this, create a <code>react-native.config.js</code> file at the root of your project:</em></p>
<p><strong>If the file doesn't exist yet, create it and set the iOS configuration to null.</strong></p>
<pre><code class="language-javascript">module.exports = {
  dependencies: {
    'react-native-vector-icons': {
      platforms: {
        ios: null,
      },
    },
  },
};
</code></pre>
<blockquote>
<p><strong>Note:</strong> If you encounter a "Multiple commands produce…" error after following the official docs, remove any duplicate font references under "Copy Bundle Resources". See <a href="https://github.com/oblador/react-native-vector-icons/issues/1074">this issue thread</a> for the fix with the most upvotes.</p>
</blockquote>
<p>Rebuild your project — you're ready to use vector icons.</p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*Yn7QHavMrCkChA4ukY7ZuQ.png" alt="iOS" /></p>
<h4>Android setup</h4>
<p><strong>Edit <code>android/app/build.gradle</code></strong> (not <code>android/build.gradle</code>) and add:</p>
<pre><code class="language-gradle">apply from: file("../../node_modules/react-native-vector-icons/fonts.gradle")
</code></pre>
<p><strong>Copy fonts to your Android project:</strong></p>
<p><em>Navigate to <code>android/app/src/main/assets/fonts</code> and paste the <code>Fonts</code> folder there.</em></p>
<p><strong>If the <code>assets</code> folder doesn't exist, create it — use lowercase for all folder names.</strong></p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*5yroV7mUVpGisXMeBwxM4Q.png" alt="Android" /></p>
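<p>With both platforms configured, rendering an icon is a one-liner. The sketch below uses the standard <code>react-native-vector-icons</code> import path; the icon set (<code>FontAwesome</code>) and glyph name (<code>rocket</code>) are just examples:</p>
<pre><code class="language-javascript">import React from "react";
import Icon from "react-native-vector-icons/FontAwesome";

// Renders the "rocket" glyph at size 30 in a dark red color
const RocketIcon = () =&gt; &lt;Icon name="rocket" size={30} color="#900" /&gt;;

export default RocketIcon;
</code></pre>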
]]></content:encoded></item><item><title><![CDATA[Leveraging SVGs in React Native: A Comprehensive Guide]]></title><description><![CDATA[Originally written in 2024. Content may vary slightly across newer versions.

Integrating SVGs into React Native applications provides numerous benefits, allowing for dynamic styling and seamless hand]]></description><link>https://chloezhou.dev/leveraging-svgs-in-react-native-a-comprehensive-guide</link><guid isPermaLink="true">https://chloezhou.dev/leveraging-svgs-in-react-native-a-comprehensive-guide</guid><category><![CDATA[SVG]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[React Native]]></category><category><![CDATA[Tutorial]]></category><category><![CDATA[Mobile Development]]></category><dc:creator><![CDATA[Chloe Zhou]]></dc:creator><pubDate>Sat, 04 Apr 2026 03:43:22 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69bdf388475ca1797455d60f/b438a054-1f4e-488a-be92-cac7effbf610.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p>Originally written in 2024. Content may vary slightly across newer versions.</p>
</blockquote>
<p>Integrating SVGs into React Native applications provides numerous benefits, allowing for dynamic styling and seamless handling of SVG files. This guide outlines <strong>two key approaches</strong> to harnessing the power of SVGs in React Native:</p>
<ol>
<li><p><strong>Utilizing Higher-Order Components (HOCs):</strong> HOCs serve as a powerful tool for enhancing the functionality of components in React Native applications. By implementing HOCs, developers can easily modify the color and size of SVG icons dynamically. This approach offers flexibility and reusability, enabling consistent styling across various components.</p>
</li>
<li><p><strong>Leveraging <code>react-native-svg-transformer</code> for SVG Handling:</strong> The <code>react-native-svg-transformer</code> library facilitates the seamless integration of SVG files into React Native projects. It converts SVG XML text into React components that can be recognized and rendered by React Native, streamlining the management of SVG assets and ensuring compatibility with React Native's rendering engine.</p>
</li>
</ol>
<h4>SVG Transformation with React Native SVG Transformer</h4>
<pre><code class="language-javascript">const { getDefaultConfig } = require("metro-config");

module.exports = (async () =&gt; {
  const {
    resolver: { sourceExts, assetExts },
  } = await getDefaultConfig();
  return {
    transformer: {
      babelTransformerPath: require.resolve("react-native-svg-transformer"),
    },
    resolver: {
      assetExts: assetExts.filter((ext) =&gt; ext !== "svg"),
      sourceExts: [...sourceExts, "svg"],
    },
  };
})();
</code></pre>
<p>This code, placed in a <code>metro.config.js</code> file at the project root, configures Metro (the React Native bundler) to handle SVG files correctly. By default, Metro doesn't know how to process SVGs, so this configuration is necessary.</p>
<p>Here's what it does:</p>
<ol>
<li>Calls <code>getDefaultConfig()</code> to obtain the default Metro configuration.</li>
<li>Retrieves the current resolver settings, including supported source and asset file extensions.</li>
<li>Adds <code>.svg</code> to the list of source file extensions so Metro can recognize and process SVG files.</li>
<li>Uses <code>react-native-svg-transformer</code> to convert SVG files into React components that can be imported directly.</li>
<li>Removes <code>.svg</code> from the asset file extensions list, since SVG files are now treated as source files instead.</li>
</ol>
<h4>Creating a Higher-Order Component (HOC) for SVGs</h4>
<pre><code class="language-javascript">import React from "react";

import Colors from "theme/Colors";

export const withSvgIcon = (Component) =&gt; {
  const WithSvgIcon = ({ color = Colors.primary, width, height }) =&gt; {
    return width &amp;&amp; height ? (
      &lt;Component style={{ color }} width={width} height={height} /&gt;
    ) : (
      &lt;Component style={{ color }} /&gt;
    );
  };

  return WithSvgIcon;
};
</code></pre>
<p><code>withSvgIcon</code> is a higher-order component (HOC) that wraps an SVG component and exposes three props: <code>color</code> (defaults to <code>Colors.primary</code>), <code>width</code>, and <code>height</code>.</p>
<p>When <code>width</code> and <code>height</code> are provided, they are applied directly to the SVG. Otherwise, the component falls back to its default dimensions.</p>
<h4>Dynamic Styling of SVGs</h4>
<pre><code class="language-javascript">import React from "react";
import { withSvgIcon } from "./withSvgIcon";
import iconSvgContent from "./assets/icon.svg";

const IconComponent = withSvgIcon(iconSvgContent);

const App = () =&gt; {
  return (
    &lt;View&gt;
      &lt;IconComponent color="red" width={20} height={20} /&gt;
    &lt;/View&gt;
  );
};

export default App;
</code></pre>
<p>By combining these two approaches, SVG files are transformed into renderable React components with customizable <code>color</code>, <code>width</code>, and <code>height</code> props.</p>
<p><strong>Recap</strong></p>
<ol>
<li><strong>SVG icon import</strong>: Import SVG icons into the project.</li>
<li><strong>Metro configuration</strong>: Set up <code>react-native-svg-transformer</code> to handle SVG files.</li>
<li><strong><code>withSvgIcon</code> HOC</strong>: Add dynamic styling capabilities to SVG components.</li>
<li><strong>Usage</strong>: Use the styled SVG icons within your React Native components.</li>
</ol>
<h4>Troubleshooting</h4>
<p><strong>Why isn't the <code>color</code> prop changing my SVG icon's color?</strong></p>
<p>Make sure the <code>fill</code> attribute in your SVG file is set to <code>currentColor</code>. Without this, the <code>color</code> prop has no effect — the SVG will render with its hardcoded fill instead of inheriting the color from the parent component.</p>
<p><strong>Example:</strong></p>
<p>In your SVG file, set <code>fill="currentColor"</code> on the path:</p>
<pre><code class="language-svg">&lt;svg width="24" height="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;
  &lt;path fill="currentColor" d="M12 2C6.48 2 2 6.48 2 12s4.48 10 10 10 10-4.48 10-10S17.52 2 12 2z"/&gt;
&lt;/svg&gt;
</code></pre>
<p><code>currentColor</code> tells the SVG element to inherit the text color of its parent — which is how the <code>color</code> prop gets applied.</p>
<p><strong>Still running into issues?</strong></p>
<p>Check the <a href="https://github.com/kristerkari/react-native-svg-transformer">react-native-svg-transformer documentation</a> for dependency requirements and Metro configuration differences across React Native versions.</p>
<p>Don’t hesitate to drop a comment, hit that follow button for more React Native goodness! 🚀</p>
]]></content:encoded></item><item><title><![CDATA[Data Migration Design in a CLI Tool: From Local JSON to Cloud Database]]></title><description><![CDATA[Originally written in 2025. Content may vary slightly across newer versions.

Data Migration Design in a CLI Tool: From Local JSON to Cloud Database
Introduction
I built a simple CLI tool that lets us]]></description><link>https://chloezhou.dev/data-migration-design-in-a-cli-tool-from-local-json-to-cloud-database</link><guid isPermaLink="true">https://chloezhou.dev/data-migration-design-in-a-cli-tool-from-local-json-to-cloud-database</guid><category><![CDATA[cli]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[Databases]]></category><category><![CDATA[Software Engineering]]></category><dc:creator><![CDATA[Chloe Zhou]]></dc:creator><pubDate>Sat, 04 Apr 2026 02:04:44 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69bdf388475ca1797455d60f/2c33b1c0-9877-4e76-93c2-07fa819acc9e.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p>Originally written in 2025. Content may vary slightly across newer versions.</p>
</blockquote>
<h3>Data Migration Design in a CLI Tool: From Local JSON to Cloud Database</h3>
<h3>Introduction</h3>
<p>I built a simple <a href="https://www.npmjs.com/package/czhou-notes-cli">CLI tool</a> that lets users take notes from the terminal. In version 1, there was no login and no cloud — notes were just saved to a local JSON file on the user’s computer.</p>
<p>That setup worked fine at the beginning. But as the tool grew, I added basic authentication and moved storage to the cloud (using <a href="https://supabase.com/">Supabase</a>), so users could access their notes across machines.</p>
<p>That change introduced a new problem:</p>
<p><strong>What about users who already had notes saved locally?</strong></p>
<p>If I just switched systems without thinking it through, they’d open the new version and find their notes gone.</p>
<p>This post explains how I handled that: designing a migration system that moves notes from the local file to the cloud — safely, clearly, and without making a mess.</p>
<h3>Migration Means Moving Houses</h3>
<p>When I was trying to understand how to approach migration, I asked AI for help. It gave me a useful analogy:</p>
<p><strong>“It’s like moving houses.”</strong></p>
<p>You have stuff in your old home — your local JSON file — and you want to move it into a new one — your cloud database. That sounds simple, but moving isn’t just about copying boxes from one place to another. It’s about making sure everything still works in the new place, nothing important gets lost, and the move itself doesn’t break anything.</p>
<p>Here’s what that actually means.</p>
<h4><strong>It’s not just copying — it’s cleaning up</strong></h4>
<p>In version 1, all notes were saved in a single JSON file on the user’s machine. That file might contain empty notes, malformed entries, or slightly inconsistent formatting.</p>
<p>But version 2 uses a structured cloud database (Supabase), where everything needs to follow a fixed schema. So before I can move anything, I need to <strong>sanitize</strong> the data:</p>
<ul>
<li><p>Removing notes that are clearly empty or invalid</p>
</li>
<li><p>Making sure each note is shaped like an object with a <code>content</code> field and a <code>tags</code> field, which is an array</p>
</li>
</ul>
<p>In other words, before I move in, I clean the data up.</p>
<h4><strong>The data’s meaning also changes</strong></h4>
<p>In version 1, the tool assumed that <strong>one computer = one user</strong>. Notes were tied to the machine.</p>
<p>In version 2, notes are tied to a <strong>cloud user account</strong>. Now, you can log in from anywhere and access your notes. That’s a big shift in meaning:</p>
<ul>
<li><p>The same note is now “owned” by a user, not a device</p>
</li>
<li><p>Access is controlled by authentication, not file access</p>
</li>
<li><p>Notes can now live across machines, not just one</p>
</li>
</ul>
<p>So part of the migration is not just “moving files,” but <strong>changing what the data represents</strong>.</p>
<h4><strong>Users need to know what’s happening</strong></h4>
<p>You don’t move into a new house without telling your friends. Migration needs to be transparent too.</p>
<p>If I silently switch to the cloud without warning, users might open the new version and think all their notes are gone. That’s bad UX. So I designed the migration to include:</p>
<ul>
<li><p>A check to see if old notes exist</p>
</li>
<li><p>A prompt letting users choose whether to migrate</p>
</li>
<li><p>Clear feedback on what was migrated and what wasn’t</p>
</li>
</ul>
<p>Good UX means <strong>telling users what’s going on</strong>, not making them guess.</p>
<h4><strong>Bringing it together</strong></h4>
<p>Migration isn’t just a file transfer. It’s three things working together:</p>
<ul>
<li><p><strong>Data transformation:</strong> Cleaning and reshaping the notes so they can live in a database.</p>
</li>
<li><p><strong>State change:</strong> Notes now belong to a cloud account, not just a local machine.</p>
</li>
<li><p><strong>User experience:</strong> Letting users know what’s happening and giving them control.</p>
</li>
</ul>
<p>That’s what it really means to move from version 1 to version 2. It’s not just a new backend — it’s a change in what data is, who owns it, and how users interact with it.</p>
<h3>The Migration Strategy: Five Steps</h3>
<p>Once I understood what this migration really meant, I broke the process down into five concrete steps. Each step solves a specific problem, and together, they ensure the user’s data moves safely from version 1 to version 2.</p>
<h4>Detection — Does the user have old data?</h4>
<p>Before running any migration, I need to know whether there’s anything to migrate.</p>
<p>The version 1 CLI tool stored notes locally in a known file path. So the first step is to check if that file exists on the user’s machine.</p>
<p>If the file isn’t there, then this user is either new or never saved anything — no migration needed.</p>
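<p>A minimal sketch of this check, using Node's <code>fs/promises</code> API (the actual legacy file path depends on where version 1 stored its data):</p>
<pre><code class="language-javascript">const { access } = require("fs/promises");

// Returns true only if the version 1 notes file exists on this machine
const hasLegacyNotes = async (legacyPath) =&gt; {
  try {
    await access(legacyPath); // throws if the file is missing
    return true;
  } catch {
    return false;
  }
};
</code></pre>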
<h4>Safety — Make a backup before touching anything</h4>
<p>Even if migration is simple, data loss is never acceptable.</p>
<p>Before doing anything else, I make a full backup of the original JSON file. This way, if something goes wrong during migration — or if the user just wants to revert — they can recover their data.</p>
<p>This is a simple but important rule: <strong>never destroy the original source</strong>.</p>
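<p>In code, this step can be as small as one copy before anything else runs (a sketch; the file names are illustrative):</p>
<pre><code class="language-javascript">const { copyFile } = require("fs/promises");

// Copy the legacy notes file before migration ever touches it.
// copyFile never modifies the source, so the original stays intact.
const backupLegacyNotes = async (notesPath, backupPath) =&gt; {
  await copyFile(notesPath, backupPath);
  return backupPath;
};
</code></pre>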
<h4>Translation — Clean up and reshape the data</h4>
<p>The local notes file was flexible. In version 2, the data needs to follow a specific structure to be accepted by the cloud database.</p>
<p>So before inserting anything, I sanitize each note:</p>
<ul>
<li><p>Remove empty notes (e.g. whitespace-only)</p>
</li>
<li><p>Skip notes with missing or malformed fields</p>
</li>
<li><p>Make sure each note has the expected shape</p>
</li>
</ul>
<p>This is the “translation” step: same ideas, but restructured to fit the new format.</p>
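<p>The checks above can be sketched as one filter. The field names follow the shape described in this post (a <code>content</code> string and a <code>tags</code> array); everything else is an assumption:</p>
<pre><code class="language-javascript">// Keep only notes that match the version 2 schema
const sanitizeNotes = (rawNotes) =&gt; {
  if (!Array.isArray(rawNotes)) return [];

  return rawNotes.filter(
    (note) =&gt;
      note !== null &amp;&amp;
      typeof note === "object" &amp;&amp;
      typeof note.content === "string" &amp;&amp;
      note.content.trim() !== "" &amp;&amp; // drop whitespace-only notes
      Array.isArray(note.tags)
  );
};
</code></pre>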
<h4>Transfer — Save the cleaned notes to the cloud</h4>
<p>After cleanup, I use the database utility functions I already built (like <code>createNote</code>) to send each note to Supabase. These functions are the same ones used by the app during normal use — so the migrated notes behave exactly like new ones.</p>
<p>If any note fails to save, I don’t crash the whole process. I log it, skip it, and continue.</p>
<h4>Verification — Show the result to the user</h4>
<p>Once the migration runs, I want to tell the user exactly what happened.</p>
<p>So I track:</p>
<ul>
<li><p>How many notes were found</p>
</li>
<li><p>How many were skipped</p>
</li>
<li><p>How many were successfully migrated</p>
</li>
</ul>
<p>The CLI shows a short summary at the end, so the user knows whether everything worked — or if they need to take a closer look.</p>
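<p>The transfer and verification steps fit naturally into one loop. This is a sketch: <code>createNote</code> stands in for the real Supabase helper mentioned above, and the summary shape is illustrative:</p>
<pre><code class="language-javascript">// Migrate each note independently; one failure never aborts the run
const migrateNotes = async (notes, createNote) =&gt; {
  const summary = { found: notes.length, migrated: 0, skipped: 0 };

  for (const note of notes) {
    try {
      await createNote(note);
      summary.migrated += 1;
    } catch (error) {
      summary.skipped += 1; // log and continue instead of crashing
    }
  }

  return summary;
};
</code></pre>
<p>The returned counts feed directly into the summary the CLI prints at the end.</p>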
<h4>Why this approach works</h4>
<p>This strategy covers both data integrity and user experience:</p>
<ul>
<li><p>Each step is small, focused, and testable</p>
</li>
<li><p>If anything fails, users don’t lose their data</p>
</li>
<li><p>Users are never left guessing what just happened</p>
</li>
</ul>
<h3>UX Considerations</h3>
<p>I didn’t want the migration to interrupt users. If they don’t have old data, it should just be invisible.</p>
<p>If they do, they should get a clear path — but only when they need it.</p>
<h4>It only checks once, during setup</h4>
<p>When the user runs <code>note setup</code>, the CLI checks if there’s a local JSON file from version 1.</p>
<p>If it exists, the CLI shows this:</p>
<pre><code class="language-bash">Legacy notes detected!
You can run:
  note migrate check   # See what can be migrated
  note migrate         # Perform the migration
</code></pre>
<p>If there’s no local data, nothing happens. No prompt, no message.</p>
<p>This avoids showing unnecessary stuff to new users.</p>
<h4>Output is simple and direct</h4>
<p>When <code>note migrate</code> runs, the CLI prints something like:</p>
<pre><code class="language-bash">Found 8 local notes.
6 migrated successfully.
2 skipped (empty or invalid).
</code></pre>
<p>It doesn’t hide anything. If a note fails, it’s listed and skipped.</p>
<p>The point is to tell users exactly what was moved, and what wasn’t.</p>
<h4>It won’t run again once migration is done</h4>
<p>After a successful migration, the CLI deletes the old JSON file and archives a copy in a separate folder.</p>
<p>So if the user runs <code>note migrate</code> again, the tool won’t find anything to migrate — it just says:</p>
<pre><code class="language-bash">No legacy notes found. Nothing to migrate.
</code></pre>
<p>There’s no per-note tracking. Migration is a one-time thing.</p>
<p>If something goes wrong, the backup is still there.</p>
<h3>Key Principles Behind Good Migrations</h3>
<p>When I put together this migration logic, I mostly followed a few basic rules.</p>
<h4><strong>Back up first</strong></h4>
<p>Before doing anything, the CLI makes a copy of the notes file (e.g. <code>db-backup.json</code>) in the same folder. If something breaks, the original file is still there.</p>
<h4><strong>If something fails, keep going</strong></h4>
<p>One bad note shouldn’t block everything.</p>
<p>The CLI skips anything invalid, and finishes what it can.</p>
<p>No need to roll back or crash the whole thing.</p>
<h4><strong>Say what’s happening</strong></h4>
<p>This isn’t a silent background process.</p>
<p>If notes are being moved, the CLI tells you how many it found, how many migrated, and what got skipped.</p>
<h4><strong>Let the user choose</strong></h4>
<p>Migration only runs if the user decides to.</p>
<p>The CLI checks once during <code>note setup</code>, but it’s up to the user whether to run <code>note migrate</code>.</p>
<p>No auto-magic.</p>
<h4><strong>Safe to run more than once</strong></h4>
<p>If the migration already happened, running it again just says:</p>
<pre><code class="language-bash">No legacy notes found. Nothing to migrate.
</code></pre>
<p>No side effects, no surprises.</p>
<h3>Conclusion</h3>
<p>Working on this migration helped me see that data migration is more than just a technical task. It touches multiple areas — technology, product design, and user experience.</p>
<p>Even for a lightweight CLI tool, data needs to be treated as a valuable user asset. Losing data means losing user trust, and that’s not easy to regain.</p>
<p>A good migration might run only once — but how it’s designed says a lot about how much you care about your users.</p>
]]></content:encoded></item><item><title><![CDATA[Error Handling in CLI Tools: A Practical Pattern That’s Worked for Me]]></title><description><![CDATA[Originally written in 2025. Content may vary slightly across newer versions.

Error Handling in CLI Tools: A Practical Pattern That’s Worked for Me
I’ve been building a small CLI tool recently to help]]></description><link>https://chloezhou.dev/error-handling-in-cli-tools-a-practical-pattern-that-s-worked-for-me</link><guid isPermaLink="true">https://chloezhou.dev/error-handling-in-cli-tools-a-practical-pattern-that-s-worked-for-me</guid><category><![CDATA[cli]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[error handling]]></category><dc:creator><![CDATA[Chloe Zhou]]></dc:creator><pubDate>Sat, 04 Apr 2026 01:48:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69bdf388475ca1797455d60f/7aa12032-4ac0-41a6-8be3-0189b2d09ee2.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p>Originally written in 2025. Content may vary slightly across newer versions.</p>
</blockquote>
<h3><strong>Error Handling in CLI Tools: A Practical Pattern That’s Worked for Me</strong></h3>
<p>I’ve been building a small CLI tool recently to help manage personal notes from the terminal. It’s a simple project, but adding features like persistent user sessions and database access made me think more seriously about error handling.</p>
<p>In particular, I wanted to find a balance between surfacing helpful messages to users while keeping my codebase clean and predictable. This post documents the approach I landed on, why I chose it, and how it plays out in a few real command implementations.</p>
<h4>Why Error Handling Matters in CLI Tools</h4>
<p>When designing error handling for a CLI tool, my goal was to make sure that any failure a user runs into is:</p>
<ul>
<li><p><strong>Human-readable</strong></p>
</li>
<li><p><strong>Actionable</strong></p>
</li>
<li><p><strong>Context-aware</strong></p>
</li>
</ul>
<p>To get there, I explored common error handling patterns in async JavaScript — specifically how to structure error throwing in utility functions versus catching in command handlers, and how to categorize different types of errors. I ended up with an approach that distinguishes between <strong>expected errors</strong>, <strong>system errors</strong>, and <strong>business logic errors</strong>.</p>
<h4>Two Common Patterns</h4>
<ul>
<li><p><strong>Pattern 1: Throw Errors</strong> (Recommended for CLI)</p>
</li>
<li><p><strong>Pattern 2: Return Error Objects</strong></p>
</li>
</ul>
<p>Let me show you what they look like in practice.</p>
<h4>Pattern 1: Throw Errors (Recommended for CLI)</h4>
<p>This pattern has low-level functions throw errors when something goes wrong. The errors <strong>bubble up</strong> to the command handler, which catches them and displays a friendly message.</p>
<pre><code class="language-javascript">export const saveUserSession = async (user) =&gt; {
  try {
    await ensureUserDir();
    const sessionData = { 
      id: user.id, 
      username: user.username, 
      loginTime: new Date().toISOString() 
    };
    await writeFile(
      USER_SESSION_PATH, 
      JSON.stringify(sessionData, null, 2), 
      'utf-8'
    );
    return sessionData;
  } catch (error) {
    throw new Error(
      `Could not save user session: ${error.message}`
    );
  }
};
</code></pre>
<p>The command handler then handles all errors in one place:</p>
<pre><code class="language-javascript">.command(
  'setup &lt;username&gt;', 
  'Setup user', 
  {}, 
  async (argv) =&gt; {
  try {
    const user = await findOrCreateUser(argv.username);
    await saveUserSession(user);
    console.log(
      `✅ Successfully logged in as: ${user.username}`
    );
  } catch (error) {
    console.error('❌', error.message);
    process.exit(1);
  }
});
</code></pre>
<p>This approach keeps the command code clean and focused. You only deal with errors <strong>once</strong>, and you get to present <strong>consistent</strong> messages.</p>
<h4>Pattern 2: Return Error Objects (Alternative)</h4>
<p>Here, low-level functions catch errors themselves and return objects indicating success or failure.</p>
<pre><code class="language-javascript">export const saveUserSession = async (user) =&gt; {
  try {
    await ensureUserDir();
    const sessionData = { 
      id: user.id, 
      username: user.username, 
      loginTime: new Date().toISOString() 
    };
    await writeFile(
      USER_SESSION_PATH, 
      JSON.stringify(sessionData, null, 2), 
      'utf-8'
    );
    return { success: true, data: sessionData };
  } catch (error) {
    return { 
      success: false, 
      error: `Could not save user session: ${error.message}` 
    };
  }
};
</code></pre>
<p>Then every caller must check the returned object explicitly:</p>
<pre><code class="language-javascript">.command(
  'setup &lt;username&gt;', 
  'Setup user', 
  {}, 
  async (argv) =&gt; {
  const userResult = await findOrCreateUser(argv.username);
  if (!userResult.success) {
    console.error('❌', userResult.error);
    process.exit(1);
    return;
  }

  const sessionResult = await saveUserSession(userResult.data);
  if (!sessionResult.success) {
    console.error('❌', sessionResult.error);
    process.exit(1);
    return;
  }

  console.log(
    `✅ Successfully logged in as: ${userResult.data.username}`
  );
});
</code></pre>
<p>While this pattern makes errors explicit, it can lead to repetitive and verbose code, especially in command handlers.</p>
<h4>Why I Prefer Pattern 1 (Throw Errors)</h4>
<p>This pattern feels like a better fit for CLI tools:</p>
<ul>
<li><p><strong>Low-level modules</strong> throw meaningful errors when things go wrong</p>
</li>
<li><p><strong>Errors bubble up</strong> automatically through the call stack</p>
</li>
<li><p><strong>Top-level command handlers</strong> catch them once and show user-friendly messages</p>
</li>
<li><p><strong>Exit codes</strong> tell the shell that something failed</p>
</li>
</ul>
<p>This keeps responsibilities clear: helper functions focus on their job, command handlers focus on user communication.</p>
<h4>Error Handling Strategy by Type</h4>
<h4>1. Expected “Errors” (Not Really Errors)</h4>
<p>Some conditions aren’t really errors — they’re just normal edge cases that we expect to happen occasionally. For example, if there’s no session file, that simply means the user hasn’t logged in yet.</p>
<pre><code class="language-javascript">export const getUserSession = async () =&gt; {
  try {
    await access(USER_SESSION_PATH);
    const sessionData = await readFile(
        USER_SESSION_PATH, 
        'utf-8'
    );
    return JSON.parse(sessionData);
  } catch (error) {
    if (error.code === 'ENOENT') {
      return null; 
      // File doesn't exist = no session (EXPECTED)
    }
    throw error; // Unexpected error
  }
};
</code></pre>
<h4>2. System Errors</h4>
<p>These usually come from the underlying platform — e.g. Node.js APIs, the file system, or corrupted files. They’re rare but should be surfaced with context.</p>
<pre><code class="language-javascript">export const saveUserSession = async (user) =&gt; {
  try {
    await writeFile(
      USER_SESSION_PATH,                            
      JSON.stringify(sessionData)
    );
    return sessionData;
  } catch (error) {
    // Transform technical error into user-friendly message
    throw new Error(
      `Could not save user session: ${error.message}`
    );
  }
};
</code></pre>
<h4>3. Business Logic Errors</h4>
<p>These happen when users violate your application’s rules or skip required steps. The system works fine, but the user needs to do something differently.</p>
<pre><code class="language-javascript">export const requireUserSession = async () =&gt; {
  const session = await getUserSession();
  if (!session) {
    // This is a business rule violation
    throw new Error(
      'No user session found. Please run "note setup &lt;username&gt;" first.'
    );
  }
  return session;
};
</code></pre>
<p><strong>The Key Insight</strong></p>
<p>Notice how each type gets handled differently:</p>
<p><strong>Expected conditions</strong> → Return <code>null</code> or default values, don’t throw</p>
<p><strong>System errors</strong> → Wrap with context, then throw</p>
<p><strong>Business logic errors</strong> → Throw with clear instructions for the user</p>
<p>This approach means your command handlers can catch everything with one <code>try/catch</code>, but users get appropriate messages for each situation.</p>
<h4>Complete Error Flow Example</h4>
<p>Let’s walk through how the full error handling flow works — from throwing to catching to presenting.</p>
<p><strong>Low-Level: Throw with Context</strong></p>
<pre><code class="language-javascript">export const clearUserSession = async () =&gt; {
  try {
    await unlink(USER_SESSION_PATH);
    return true;
  } catch (error) {
    if (error.code === 'ENOENT') {
      return true; 
      // File doesn't exist = mission accomplished anyway
    }
    throw new Error(
      `Failed to clear session: ${error.message}`
    );
  }
};
</code></pre>
<p>At this level, we care about <em>what</em> failed, not <em>how</em> to explain it to the user. We handle the expected case (no file) and throw system errors with context.</p>
<p><strong>Business Logic Layer: Enforce Rules</strong></p>
<pre><code class="language-javascript">export const requireUserSession = async () =&gt; {
  const session = await getUserSession();
  if (!session) {
    throw new Error(
      'No user session found. Please run "note setup &lt;username&gt;" first.'
    );
  }
  return session;
};
</code></pre>
<p>This enforces a business rule: “You must be logged in to log out.” We throw a specific message that tells the user exactly what to do.</p>
<p><strong>Command Layer: Catch + Present</strong></p>
<pre><code class="language-javascript">.command(
  'logout', 
  'Clear current user session', 
  {}, 
  async () =&gt; {
  try {
    // Business rule check
    const session = await requireUserSession(); 
    await clearUserSession(); // Low-level operation  
    console.log(
      `✓ Logged out ${session.username} successfully.`
    );
  } catch (error) {
    console.error('❌', error.message);
    process.exit(1);
  }
});
</code></pre>
<p>This is the one place where we actually <em>talk</em> to the user. We catch everything, show a friendly message, and exit with a non-zero code to signal failure.</p>
<p>Now it’s a true connected flow: check session → clear session → report success, with proper error handling at each layer!</p>
<h4>Best Practices Summary</h4>
<ul>
<li><p><strong>Low-level functions</strong>: throw meaningful errors with context, handle expected cases gracefully (like <code>ENOENT</code> → return success)</p>
</li>
<li><p><strong>Expected cases</strong>: don’t throw for normal situations — return appropriate values instead</p>
</li>
<li><p><strong>Business logic violations</strong>: throw with clear, actionable messages that tell users what to do next</p>
</li>
<li><p><strong>Command handlers</strong>: catch all errors in one place, present friendly feedback, and call <code>process.exit(1)</code> for failures</p>
</li>
<li><p><strong>Error messages</strong>: be specific and actionable — tell users exactly what went wrong and how to fix it</p>
</li>
<li><p><strong>Exit codes</strong>: use <code>process.exit(1)</code> so scripts and shells know something failed</p>
</li>
</ul>
<h4>Try It Out (And Stay Tuned)</h4>
<p>The error handling strategies in this post are part of a broader upgrade I’m working on for my CLI tool <a href="https://www.npmjs.com/package/czhou-notes-cli">czhou-notes-cli</a>. The current version stores notes locally and is already usable.</p>
<p>You can try it now via:</p>
<pre><code class="language-bash">npm install -g czhou-notes-cli
</code></pre>
<p>Right now I’m actively improving it — adding things like database support and smoother command experience — and this error handling refactor is just one piece of the puzzle.</p>
<p>If you give it a try and have any ideas or suggestions, feel free to <a href="https://github.com/chloezhoudev/czhou-notes-cli/issues">open an issue</a> or just let me know — I’d love to hear your thoughts!</p>
<p>Thanks for reading 😊</p>
]]></content:encoded></item><item><title><![CDATA[My Work Laptop Broke — Here’s How I Avoided Losing Everything]]></title><description><![CDATA[Originally written in 2025. Content may vary slightly across newer versions.

A few weeks ago, something unexpected happened at work: my laptop stopped working properly, and IT told me the only option]]></description><link>https://chloezhou.dev/my-work-laptop-broke-here-s-how-i-avoided-losing-everything</link><guid isPermaLink="true">https://chloezhou.dev/my-work-laptop-broke-here-s-how-i-avoided-losing-everything</guid><category><![CDATA[Productivity]]></category><category><![CDATA[developers]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[Data Backup]]></category><category><![CDATA[Career]]></category><dc:creator><![CDATA[Chloe Zhou]]></dc:creator><pubDate>Sat, 04 Apr 2026 01:15:48 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69bdf388475ca1797455d60f/0472f34c-673d-41c6-8240-bd1994b4fb16.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p>Originally written in 2025. Content may vary slightly across newer versions.</p>
</blockquote>
<p>A few weeks ago, something unexpected happened at work: my laptop stopped working properly, and IT told me the only option was to reinstall the system.</p>
<p>I use a Lenovo P1 running Windows 10 at work. In our company setup, all software is managed and installed through <strong>Software Center</strong>. Normally this works fine, but in my case, Software Center stopped working properly. That meant I couldn’t install any new software — which wasn’t an immediate blocker, but it was a problem waiting to happen.</p>
<p>For about two weeks, IT and I tried different ways to fix it. We went through the usual troubleshooting, but nothing worked. Even though the issue didn’t stop me from doing my daily work, I knew I couldn’t just ignore it. Sooner or later I’d need to install something new, and without Software Center, I’d be stuck. I had to deal with it once and for all.</p>
<p>At that point, the only options were to reinstall Windows or switch to a new laptop. Reinstalling would take too long, so I chose the second option: moving to a new machine. And that meant it was time to prepare a proper backup.</p>
<h3>What I Decided to Back Up</h3>
<p>When you’re about to switch laptops, it’s surprisingly hard to know what exactly you’ll need later. This was my first time really thinking it through, and I didn’t want to miss anything important. I broke it down into a few categories:</p>
<ol>
<li><p><strong>User files</strong> — Things in Desktop and Documents. These are easy to forget but also the simplest to copy.</p>
</li>
<li><p><strong>Browser data</strong> — Mainly my Chrome bookmarks. I exported them as an <code>.html</code> file so I could import them later.</p>
</li>
<li><p><strong>VS Code setup</strong> — I backed up the entire User folder, which included themes, fonts, and terminal configs.</p>
</li>
<li><p><strong>Dev config</strong> — Just the essentials: my <code>.gitconfig</code> and SSH keys.</p>
</li>
<li><p><strong>Email folders</strong> — In Outlook, I backed up a couple of emails I considered important, and some HR/Admin-related messages.</p>
</li>
</ol>
<h3>Restoring on the New System</h3>
<p>Once I had the new laptop, I started putting everything back. Desktop and Documents were straightforward — I just copied them over, and everything ended up in the right place.</p>
<p>For Chrome, I opened the bookmarks manager and used <strong>Import Bookmarks</strong> to bring in the <code>.html</code> file I had exported. Everything was in the right folders, just like on my old laptop.</p>
<p>For VS Code, I first opened it once so that the User folder was created. Then I used the backup of my entire User folder to overwrite the new one. This restored all my settings. For extensions, I manually reinstalled them because of company restrictions.</p>
<p>For development configuration, I copied over <code>.gitconfig</code> and the <code>.ssh</code> folder, then tested my GitHub connection to make sure it worked.</p>
<p>For Outlook, I realized the backup wasn’t necessary — once I logged in, all my emails were already there.</p>
<p>Some of my project folders were large, so copying them took a few hours. In hindsight, since all the projects were already pushed to GitHub, I could have just cloned the repositories and set up the local environment, which would have been faster.</p>
<p>After finishing all of this, I checked over everything and realized the new laptop felt just like the old one. The whole restoration process was smoother and quicker than I had expected.</p>
<h3>Lesson Learned</h3>
<p>The main lesson I took away from this experience is that <strong>a little preparation goes a long way</strong>. Making sure I had all the core files backed up — like my user files, VS Code settings, and development configs — made the process smooth and predictable. Other things, like emails or project files that were already synced to the cloud, didn’t actually need to be backed up.</p>
<p>At first, I was worried about potentially losing something important and how that might affect my work. But once I went through the process, I realized that as long as I prepared properly, everything went smoothly. What initially felt like a stressful, high-stakes task turned into a manageable and controlled process.</p>
]]></content:encoded></item><item><title><![CDATA[Unlocking Seamless Web Browsing in React Native with react-native-inappbrowser-reborn]]></title><description><![CDATA[Originally written in 2024. Content may vary slightly across newer versions.

Are you tired of your users getting redirected to external web browsers whenever they click on links within your React Nat]]></description><link>https://chloezhou.dev/unlocking-seamless-web-browsing-in-react-native-with-react-native-inappbrowser-reborn</link><guid isPermaLink="true">https://chloezhou.dev/unlocking-seamless-web-browsing-in-react-native-with-react-native-inappbrowser-reborn</guid><category><![CDATA[React Native]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[Mobile Development]]></category><category><![CDATA[Tutorial]]></category><category><![CDATA[Web Development]]></category><dc:creator><![CDATA[Chloe Zhou]]></dc:creator><pubDate>Sun, 29 Mar 2026 23:53:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69bdf388475ca1797455d60f/36b0f76a-5af7-455d-9f83-9d0f9b34e5ca.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p>Originally written in 2024. Content may vary slightly across newer versions.</p>
</blockquote>
<p>Are you tired of your users getting redirected to external web browsers whenever they click on links within your React Native app? Say goodbye to this frustration with <code>react-native-inappbrowser-reborn</code>!</p>
<p>In this article, we’ll dive into the practical aspects of using <code>react-native-inappbrowser-reborn</code> to provide a seamless in-app browsing experience for your users.</p>
<p>By the end of this post, we’ll present a practical demo illustrating the integration of <code>react-native-inappbrowser-reborn</code> in a React Native app. What’s more, the code for this demo is readily usable in real projects — just copy and paste!</p>
<h4><strong>Introduction</strong></h4>
<p><code>react-native-inappbrowser-reborn</code> is a powerful library that <strong>allows you to open external URLs within your React Native app’s interface, keeping users engaged without disrupting their experience</strong>. Whether you’re implementing authentication flows, payment gateways, or simply opening external links shared within the app, <code>react-native-inappbrowser-reborn</code> has got you covered.</p>
<h4><strong>Getting Started</strong></h4>
<p>Getting started with <code>react-native-inappbrowser-reborn</code> is a breeze. First, install the library in your React Native project:</p>
<pre><code class="language-shell">npm install react-native-inappbrowser-reborn
</code></pre>
<p>Next, for iOS, install the native dependencies with CocoaPods:</p>
<pre><code class="language-shell">cd ios &amp;&amp; pod install &amp;&amp; cd .. # CocoaPods on iOS needs this extra step
</code></pre>
<h4><strong>Opening URLs in In-App Browser</strong></h4>
<p>Let’s dive into the most exciting part — opening external URLs within your app using <code>react-native-inappbrowser-reborn</code>. Here’s a simple example demonstrating how to open a URL in the in-app browser:</p>
<pre><code class="language-javascript">import React from 'react';
import { TouchableOpacity, Text } from 'react-native';
import InAppBrowser from 'react-native-inappbrowser-reborn';

const ExternalLinkButton = ({ url }) =&gt; {
  const handleOpenLink = async () =&gt; {
    try {
      await InAppBrowser.open(url);
    } catch (error) {
      console.error('Failed to open link:', error);
    }
  };

  return (
    &lt;TouchableOpacity onPress={handleOpenLink}&gt;
      &lt;Text&gt;Open Link&lt;/Text&gt;
    &lt;/TouchableOpacity&gt;
  );
};

export default ExternalLinkButton;
</code></pre>
<h4>Customization Options</h4>
<p><code>react-native-inappbrowser-reborn</code> provides various customization options to tailor the in-app browsing experience according to your app's needs. For example, you can specify additional options such as:</p>
<ul>
<li><p>Show/hide the browser’s title.</p>
</li>
<li><p>Enable/disable navigation controls.</p>
</li>
<li><p>Customize the browser’s presentation style.</p>
</li>
</ul>
<p>Refer to the library’s <a href="https://github.com/proyecto26/react-native-inappbrowser">documentation</a> for a full list of available options and their usage.</p>
<h4>Demo: Appointments and Messages Panels</h4>
<p>Imagine you’re developing a mobile application for managing appointments and messages. You want to provide users with the ability to seamlessly access external content, such as messages from a messaging platform, without leaving the app’s interface. Let’s take a look at how this can be achieved using <code>react-native-inappbrowser-reborn</code>.</p>
<img src="https://cdn.hashnode.com/uploads/covers/69bdf388475ca1797455d60f/83349c18-95bd-4987-9ebb-3693848d6181.gif" alt="iOS demo" style="display:block;margin:0 auto" />

<pre><code class="language-javascript">const handleContinueToMsgPortal = async url =&gt; {
  try {
    const isAvailable = await InAppBrowser.isAvailable();
    InAppBrowser.close();

    if (isAvailable) {
      return await InAppBrowser.open(url, {
        // iOS Properties
        modalPresentationStyle: 'formSheet',
        modalEnabled: true,
        enableBarCollapsing: false,
        // Android Properties
        showTitle: false,
        enableUrlBarHiding: false,
        enableDefaultShare: false,
        showInRecents: true,
      });
    }

    return await Linking.openURL(url);
  } catch (e) {
    Alert.alert(e.toString());
  }
};
</code></pre>
<p>This code snippet defines a function <code>handleContinueToMsgPortal</code> responsible for handling the redirection of users to an external messaging platform within a React Native app.</p>
<p>Here's a breakdown of its functionality:</p>
<ol>
<li><p><em>Async Function</em>: <code>handleContinueToMsgPortal</code> is an asynchronous function that takes a URL as its parameter. It's designed to handle asynchronous operations, such as checking for the availability of the in-app browser and opening the specified URL.</p>
</li>
<li><p><em>In-App Browser Availability Check</em>: The function starts by checking if the in-app browser is available using <code>InAppBrowser.isAvailable()</code>. This method returns a boolean value indicating whether the in-app browser is available for use on the current platform.</p>
</li>
<li><p><em>Closing Existing Browser Instances</em>: If the in-app browser is available, the function proceeds to close any existing browser instances using <code>InAppBrowser.close()</code>. This ensures that any previous browser sessions are terminated before opening a new one.</p>
</li>
<li><p><em>Opening the URL</em>: If the in-app browser is available, the function attempts to open the specified URL using <code>InAppBrowser.open()</code>. It provides additional options for customizing the browser's behavior, such as setting modal presentation style on iOS and various properties on Android.</p>
</li>
<li><p><em>Fallback to Linking</em>: If the in-app browser is not available or encounters an error during the opening process, the function falls back to using <code>Linking.openURL()</code> to open the URL in the device's default external browser.</p>
</li>
<li><p><em>Error Handling</em>: The function includes a <code>try-catch</code> block to handle any errors that may occur during the process. If an error occurs, it displays an alert with the error message using <code>Alert.alert()</code>.</p>
</li>
</ol>
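<p>Stripped of the browser specifics, steps 2–5 are a capability check with a fallback. A plain-JS sketch of that shape (the objects below are mocks standing in for <code>InAppBrowser</code> and <code>Linking</code>, not the real APIs):</p>

```javascript
// Try the preferred opener when it reports itself available, otherwise fall
// back — the same control flow handleContinueToMsgPortal uses.
async function openWithFallback(url, preferred, fallback) {
  if (await preferred.isAvailable()) {
    return preferred.open(url);
  }
  return fallback.openURL(url);
}

// Mocks standing in for InAppBrowser and Linking:
const mockBrowser = {
  isAvailable: async () => false, // pretend the in-app browser is unavailable
  open: async url => `in-app:${url}`,
};
const mockLinking = { openURL: async url => `external:${url}` };
```

<p>With the mock reporting unavailable, <code>openWithFallback('https://example.com', mockBrowser, mockLinking)</code> resolves via the external path.</p>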
<img src="https://cdn.hashnode.com/uploads/covers/69bdf388475ca1797455d60f/dbeb5767-926e-4fc0-a273-33331a1cc4e8.gif" alt="" style="display:block;margin:0 auto" />

<p><em>Android demo</em></p>
<p>You may notice a behavioral difference between the platforms: iOS uses the system browser view and opens the URL in a modal, while Android opens it in a Chrome custom tab that takes up the entire screen.</p>
<p>Either way, don’t forget to handle errors if anything goes wrong.</p>
<h4>Conclusion</h4>
<p>With <code>react-native-inappbrowser-reborn</code>, you can elevate your React Native app's user experience by seamlessly integrating in-app browsing capabilities. From opening external links to handling authentication flows, the possibilities are endless. Say goodbye to abrupt redirects to external browsers and hello to a cohesive app experience for your users.</p>
<p>The source code can be found on <a href="https://github.com/xinyingz2/ReactNativeDemos/tree/feature/openInAppBroswer">github</a>. Happy coding! ☀️</p>
]]></content:encoded></item><item><title><![CDATA[How to Implement Swipe-to-Delete in React Native: A Step-by-Step Tutorial]]></title><description><![CDATA[Originally written in 2024. Content may vary slightly across newer versions.

Introduction
Swipe-to-delete functionality enhances user interaction by allowing effortless removal of items from lists wi]]></description><link>https://chloezhou.dev/how-to-implement-swipe-to-delete-in-react-native-a-step-by-step-tutorial</link><guid isPermaLink="true">https://chloezhou.dev/how-to-implement-swipe-to-delete-in-react-native-a-step-by-step-tutorial</guid><category><![CDATA[React Native]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[Frontend Development]]></category><category><![CDATA[Mobile Development]]></category><category><![CDATA[Web Development]]></category><dc:creator><![CDATA[Chloe Zhou]]></dc:creator><pubDate>Sun, 29 Mar 2026 11:13:06 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69bdf388475ca1797455d60f/8bcb5c41-c645-41d9-a964-c4b62d1b72e2.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p>Originally written in 2024. Content may vary slightly across newer versions.</p>
</blockquote>
<h4>Introduction</h4>
<p>Swipe-to-delete functionality enhances user interaction by allowing effortless removal of items from lists with a simple swipe gesture. In this tutorial, we’ll demonstrate how to achieve swipe-to-delete in React Native projects, <strong>whether set up with React Native CLI or Expo!</strong></p>
<h4>Step-by-Step Guide: React Native CLI Project</h4>
<p>Initialize a new React Native project using the React Native CLI:</p>
<pre><code class="language-shell">npx react-native init SwipeToDeleteRN
</code></pre>
<p><strong>Dependencies Installation:</strong></p>
<p>Install the required dependencies for swipe-to-delete:</p>
<pre><code class="language-shell">npm install react-native-gesture-handler
</code></pre>
<p>For iOS users, navigate to the iOS directory and run:</p>
<pre><code class="language-shell">pod install
</code></pre>
<p>In <code>App.js</code>, wrap the app in a gesture handler root view and <em>make sure to apply</em> <code>flex: 1</code> <em>to the component:</em></p>
<pre><code class="language-javascript">import { GestureHandlerRootView } from "react-native-gesture-handler";  
  
const App = () =&gt; {  
  return (  
    &lt;GestureHandlerRootView style={{ flex: 1 }}&gt;  
      &lt;MessagesScreen /&gt;  
    &lt;/GestureHandlerRootView&gt;  
  );  
};  
  
export default App;
</code></pre>
<p>The app above renders a <code>MessagesScreen</code> component, which displays a list of messages that can be swiped to delete.</p>
<p>Here is the breakdown for the <code>MessagesScreen</code> component:</p>
<pre><code class="language-javascript">import { FlatList, StyleSheet } from 'react-native';
import React, { useState } from 'react';

import Screen from '../components/Screen';
import ListItem from '../components/ListItem';
import ListItemSeparator from '../components/ListItemSeparator';
import ListItemDeleteAction from '../components/ListItemDeleteAction';

const initialMessages = [
  {
    id: 1,
    title: 'title one',
    description: 'description one',
    image: require('../assets/cat.jpg'),
  },
  {
    id: 2,
    title: 'title two',
    description: 'description two',
    image: require('../assets/cat.jpg'),
  },
  {
    id: 3,
    title: 'title three',
    description: 'description three',
    image: require('../assets/cat.jpg'),
  },
];

export default function MessagesScreen() {
  const [messages, setMessages] = useState(initialMessages);
  const [refreshing, setRefreshing] = useState(false);
  const handleDelete = message =&gt; {
    setMessages(messages =&gt; messages.filter(item =&gt; item.id !== message.id));
  };
  return (
    &lt;Screen&gt;
      &lt;FlatList
        data={messages}
        keyExtractor={item =&gt; item.id.toString()}
        renderItem={({ item }) =&gt; (
          &lt;ListItem
            title={item.title}
            subTitle={item.description}
            image={item.image}
            onPress={() =&gt; console.log('item clicked.')}
            renderRightActions={() =&gt; (
              &lt;ListItemDeleteAction onPress={() =&gt; handleDelete(item)} /&gt;
            )}
          /&gt;
        )}
        ItemSeparatorComponent={ListItemSeparator}
        refreshing={refreshing}
        onRefresh={() =&gt; {
          setMessages([
            {
              id: 1,
              title: 'title one',
              description: 'description one',
              image: require('../assets/cat.jpg'),
            },
          ]);
        }}
      /&gt;
    &lt;/Screen&gt;
  );
}

const styles = StyleSheet.create({});
</code></pre>
<p>In this component:</p>
<ul>
<li><p>We define a state variable <code>messages</code> to hold the list of messages. Initially, it contains some dummy messages.</p>
</li>
<li><p>We define a function <code>handleDelete</code> to remove a message from the list when the delete action is triggered.</p>
</li>
<li><p>Inside the <code>FlatList</code>, each message is rendered using the <code>ListItem</code> component, which displays the title, description, and an image.</p>
</li>
<li><p>The <code>ListItemDeleteAction</code> component is used to render the delete button, which triggers the <code>handleDelete</code> function when pressed.</p>
</li>
<li><p>The <code>renderRightActions</code> prop of <code>ListItem</code> component renders the delete action using the <code>ListItemDeleteAction</code> component.</p>
</li>
</ul>
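<p>The reason <code>handleDelete</code> uses <code>filter</code> is that it returns a <em>new</em> array, so <code>setMessages</code> receives a fresh reference and React re-renders. In isolation:</p>

```javascript
// filter builds a new array and leaves the original untouched —
// exactly what a React state update needs.
const messages = [
  { id: 1, title: 'title one' },
  { id: 2, title: 'title two' },
  { id: 3, title: 'title three' },
];

const remove = (list, message) => list.filter(item => item.id !== message.id);

const next = remove(messages, { id: 2 });
// next contains ids 1 and 3; messages still holds all three items
```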
<p>In the <code>ListItem</code> component, import <code>Swipeable</code> from <code>react-native-gesture-handler/Swipeable</code> and wrap it around the entire component while passing the <code>renderRightActions</code> prop:</p>
<pre><code class="language-javascript">import { Image, StyleSheet, TouchableHighlight, View } from 'react-native';
import React from 'react';
import Swipeable from 'react-native-gesture-handler/Swipeable';
import AppText from './AppText';
import Colors from '../themes/Colors';

export default function ListItem({
  title,
  subTitle,
  image,
  onPress,
  renderRightActions,
}) {
  return (
    &lt;Swipeable renderRightActions={renderRightActions}&gt;
      &lt;TouchableHighlight underlayColor={Colors.lightGrey} onPress={onPress}&gt;
        &lt;View style={styles.container}&gt;
          &lt;Image source={image} style={styles.image} /&gt;
          &lt;View style={styles.detailsContainer}&gt;
            &lt;AppText style={styles.title}&gt;{title}&lt;/AppText&gt;
            &lt;AppText style={styles.subTitle}&gt;{subTitle}&lt;/AppText&gt;
          &lt;/View&gt;
        &lt;/View&gt;
      &lt;/TouchableHighlight&gt;
    &lt;/Swipeable&gt;
  );
}

const styles = StyleSheet.create({
  container: {
    flexDirection: 'row',
    padding: 15,
  },
  image: {
    borderRadius: 35,
    width: 70,
    height: 70,
    marginRight: 10,
  },
  detailsContainer: {
    justifyContent: 'center',
  },
  title: {
    textTransform: 'capitalize',
    fontWeight: '500',
    paddingVertical: 5,
  },
  subTitle: {
    fontSize: 16,
    color: Colors.mediumGrey,
  },
});
</code></pre>
<p>The <code>renderRightActions</code> prop should be a function that returns a React component (e.g., <code>ListItemDeleteAction</code>) representing the delete UI.</p>
<pre><code class="language-javascript">import { StyleSheet, TouchableWithoutFeedback, View } from 'react-native';
import React from 'react';
import Colors from '../themes/Colors';
import Icon from 'react-native-vector-icons/AntDesign';

export default function ListItemDeleteAction({ onPress }) {
  return (
    &lt;TouchableWithoutFeedback onPress={onPress}&gt;
      &lt;View style={styles.container}&gt;
        &lt;Icon name="delete" color={Colors.white} size={30} /&gt;
      &lt;/View&gt;
    &lt;/TouchableWithoutFeedback&gt;
  );
}

const styles = StyleSheet.create({
  container: {
    backgroundColor: Colors.dangerRed,
    width: 70,
    justifyContent: 'center',
    alignItems: 'center',
  },
  icon: {
    color: Colors.white,
  },
});
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/69bdf388475ca1797455d60f/3acc3ea4-d1bb-49cd-af21-22207ca3ebc9.gif" alt="iOS" style="display:block;margin:0 auto" />

<p>The source code can be found <a href="https://github.com/xinyingz2/ReactNativeDemos/tree/feature/swipeToDelete">here</a> on GitHub.</p>
<h4>Step-by-Step Guide: Expo Project</h4>
<p>Initialize a new Expo project:</p>
<pre><code class="language-shell">expo init SwipeToDeleteExpo
</code></pre>
<p><strong>Install Dependencies:</strong></p>
<p>Add the required dependencies for swipe-to-delete:</p>
<pre><code class="language-shell">expo install react-native-gesture-handler
</code></pre>
<p><strong>Implement Swipe-to-Delete:</strong></p>
<p>Follow the same steps as the React Native CLI project to implement swipe-to-delete using the <code>Swipeable</code> component. And <strong>DO NOT FORGET</strong> to add <code>import 'react-native-gesture-handler';</code> at the very top of your project’s entry file (e.g., <code>App.js</code>). Otherwise <code>Swipeable</code> won’t work!</p>
<p><strong>Testing:</strong></p>
<p>Run your Expo project using the Expo client app or on a physical device to test swipe-to-delete functionality.</p>
<h4>Conclusion</h4>
<p>By following the provided steps, you can seamlessly integrate swipe-to-delete functionality into your React Native projects, regardless of whether they are set up with React Native CLI or Expo. Enhance user experience and interaction by enabling users to efficiently manage lists with intuitive swipe gestures.</p>
<p>We encourage you to run the provided source code on your development environment. By doing so, you’ll gain a deeper understanding of how swipe-to-delete works in practice and how you can customize it to fit your specific project requirements.</p>
<p>Happy coding! ☀️</p>
]]></content:encoded></item><item><title><![CDATA[Simplifying File Download and Viewing in React Native Apps]]></title><description><![CDATA[Originally written in 2024. Content may vary slightly across newer versions.

What is React Native, and how does it facilitate cross-platform development?
React Native enables developers to build mobi]]></description><link>https://chloezhou.dev/simplifying-file-download-and-viewing-in-react-native-apps</link><guid isPermaLink="true">https://chloezhou.dev/simplifying-file-download-and-viewing-in-react-native-apps</guid><category><![CDATA[React Native]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[Mobile Development]]></category><category><![CDATA[Frontend Development]]></category><category><![CDATA[Web Development]]></category><dc:creator><![CDATA[Chloe Zhou]]></dc:creator><pubDate>Sun, 29 Mar 2026 05:58:24 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69bdf388475ca1797455d60f/e5fa365f-e646-419d-aa72-77263ebfa8d8.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p>Originally written in 2024. Content may vary slightly across newer versions.</p>
</blockquote>
<h4>What is React Native, and how does it facilitate cross-platform development?</h4>
<p>React Native enables developers to build mobile applications for both iOS and Android platforms using a single codebase.</p>
<h4>Why is file management important in React Native development?</h4>
<p>File management is crucial for many applications, such as handling downloads and opening files. Implementing these features requires consideration of platform-specific differences and permissions.</p>
<h4>What tools or library should be used?</h4>
<p>When it comes to file operations, the <code>react-native-blob-util</code> library (the actively maintained successor to <code>react-native-fetch-blob</code>) offers robust capabilities. It provides a comprehensive set of functions for handling file downloads, uploads, and other file-related tasks in React Native applications.</p>
<h4>File Download</h4>
<p>Key considerations:</p>
<ul>
<li><p>Choose an appropriate directory based on the platform.</p>
</li>
<li><p>Construct the file path using a unique identifier.</p>
</li>
<li><p>Execute platform-specific download logic, handling permissions and errors accordingly.</p>
</li>
</ul>
<pre><code class="language-javascript">import { PermissionsAndroid, Platform, Alert } from "react-native";
import DeviceInfo from "react-native-device-info";
import ReactNativeBlobUtil from "react-native-blob-util";
import FileViewer from "react-native-file-viewer";
import _ from "lodash";

const downloadPdfToFileSystem = async (base64, fileId) =&gt; {
  const IS_IOS = Platform.OS === "ios";
  const {
    dirs: { DownloadDir, DocumentDir },
  } = ReactNativeBlobUtil.fs;
  const dirs = IS_IOS ? DocumentDir : DownloadDir;
  const path = dirs + '/' + fileId + '.pdf';

  try {
    if (!IS_IOS) {
      const systemVersion = DeviceInfo.getSystemVersion();
      const isDownloadAllowed = await PermissionsAndroid.request(
        PermissionsAndroid.PERMISSIONS.WRITE_EXTERNAL_STORAGE
      );

      if (
        parseInt(systemVersion, 10) &gt;= 11 || // compare numerically, not as strings
        isDownloadAllowed === PermissionsAndroid.RESULTS.GRANTED
      ) {
        await ReactNativeBlobUtil.fs.writeFile(path, base64, "base64");
      }

      return path;
    }

    const ReactNativeBlobConfigs = { fileCache: true, path };
    const fileUrl = 'data:application/pdf;base64,' + base64;
    const response = await ReactNativeBlobUtil.config(
      ReactNativeBlobConfigs
    ).fetch("GET", fileUrl, {});

    return response.path();
  } catch (e) {
    console.log("PDF error ", e);
    Alert.alert(e.toString());
  }
};
</code></pre>
<p>To streamline file downloads, I’ve implemented a function called <code>downloadPdfToFileSystem</code>, which facilitates downloading PDF data in base64 format and storing it in the device’s file system.</p>
<p>Here’s a breakdown of the process:</p>
<p>1. Choose an appropriate directory based on the platform: Use <code>DocumentDir</code> for iOS and <code>DownloadDir</code> for Android.</p>
<p>2. Construct the file path using a unique identifier, <code>fileId</code>.</p>
<p>3. Execute platform-specific download logic:</p>
<p><strong>For Android</strong>: Check the device’s system version. If it’s 11.0 or higher, there is no need to request the WRITE_EXTERNAL_STORAGE permission; proceed with writing the file. Otherwise, request permission using <code>PermissionsAndroid.request</code>.</p>
<p><strong>For iOS:</strong> Configure file cache and path using <code>ReactNativeBlobUtil.config</code>; Use the <code>fetch</code> method to retrieve the PDF file from base64 data and save it.</p>
<p>4. Handle errors gracefully: Log error messages to the console and display an alert dialog if any issues arise.</p>
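<p>One subtlety in the Android branch: OS versions need to be compared numerically, because comparing the raw version strings is lexicographic and gives wrong answers. A tiny helper (the name is hypothetical):</p>

```javascript
// "9.0" >= "11.0" is TRUE when compared as strings ('9' sorts after '1'),
// so parse the major version number before comparing.
function isAtLeastVersion(systemVersion, major) {
  return parseInt(systemVersion, 10) >= major;
}

console.log('9.0' >= '11.0');              // true — the lexicographic trap
console.log(isAtLeastVersion('9.0', 11));  // false — correct
console.log(isAtLeastVersion('11.0', 11)); // true
```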
<p>☀️ Highlights:</p>
<ul>
<li><strong>Choice of Directories:</strong></li>
</ul>
<p><code>Documents</code> directory in iOS and <code>Downloads</code> in Android: Selected based on platform differences, permission management, and user experience considerations.</p>
<p><strong>Platform Differences:</strong> iOS and Android platforms have different file system structures. iOS apps typically save user-generated files in the <code>Documents</code> directory, while Android apps tend to save downloadable files in the external storage’s <code>Downloads</code> directory.</p>
<p><strong>Permission Management:</strong> Accessing external storage on Android usually requires dynamically requesting permissions, especially after Android 11 (API level 30), which imposes stricter restrictions on file access. In contrast, iOS is relatively more lenient in terms of file access permissions and does not require explicit permission requests.</p>
<p><strong>User Experience:</strong> Saving downloaded files in familiar locations can enhance user experience. For example, on Android devices, saving files to the <code>Downloads</code> directory allows users to easily locate downloaded files in the system file manager or other apps.</p>
<ul>
<li><strong>Understanding</strong> <code>PermissionsAndroid.request</code></li>
</ul>
<p>This function requests write access to external storage on Android. It’s an asynchronous operation, awaiting the user’s response, which can be granted, denied, or permanently denied (“never ask again”).</p>
<p>Based on the user’s response, developers can take appropriate actions, such as proceeding with the file writing operation or providing relevant prompts when the user denies the permission.</p>
<ul>
<li><strong>Data URL Explanation</strong></li>
</ul>
<p>A data URL is a special format containing Base64-encoded data. In this case, it specifies the MIME type (“application/pdf”) and the encoded PDF content, guiding <code>ReactNativeBlobUtil</code> on data handling.</p>
<ul>
<li><strong>Handling Image Downloads</strong></li>
</ul>
<p>For images, change the MIME type to “image/jpeg” or “image/png” and supply the image’s Base64 data in place of the PDF content. This ensures proper handling by <code>ReactNativeBlobUtil</code>.</p>
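<p>Both the PDF and image cases come down to the same string construction. A plain-JS helper (the <code>toDataUrl</code> name is hypothetical, not part of the library):</p>

```javascript
// Build a data URL from Base64 content and a MIME type.
function toDataUrl(base64, mime = 'application/pdf') {
  return `data:${mime};base64,${base64}`;
}

const pdfUrl = toDataUrl('JVBERi0xLjQ=');                    // default: PDF
const jpegUrl = toDataUrl('/9j/4AAQSkZJRg==', 'image/jpeg'); // image variant
```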
<h4>Check File Existence</h4>
<pre><code class="language-javascript">const checkFileExistence = async (fileId) =&gt; {
  const IS_IOS = Platform.OS === "ios";
  const {
    dirs: { DownloadDir, DocumentDir },
  } = ReactNativeBlobUtil.fs;
  const dirs = IS_IOS ? DocumentDir : DownloadDir;
  const path = dirs + '/' + fileId + '.pdf';

  try {
    const isExist = await ReactNativeBlobUtil.fs.exists(path);
    return { isExist, path };
  } catch (e) {
    console.log("check file existence error ", e);
    Alert.alert(e.toString());
  }
};
</code></pre>
<p>This function checks the existence of a file by constructing the file path based on the provided <code>fileId</code>.</p>
<p>It uses the <code>ReactNativeBlobUtil.fs.exists</code> method to check if the file exists at the specified path. The result is stored in the <code>isExist</code> variable.</p>
<p>If any errors occur during the process, such as invalid file paths or permission issues, they are caught in the <code>catch</code> block. The error message is logged to the console using <code>console.log</code> and displayed as an alert using <code>Alert.alert</code>.</p>
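<p>Notice that <code>downloadPdfToFileSystem</code> and <code>checkFileExistence</code> both rebuild the same file path by hand. Extracting that logic keeps the two functions in sync (a sketch; <code>buildPdfPath</code> is a hypothetical name):</p>

```javascript
// Single source of truth for where a PDF lives, given the platform directory.
function buildPdfPath(baseDir, fileId) {
  return `${baseDir}/${fileId}.pdf`;
}

const pdfPath = buildPdfPath('/storage/emulated/0/Download', 'invoice-42');
// "/storage/emulated/0/Download/invoice-42.pdf"
```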
<h4>Opening Downloaded Files</h4>
<pre><code class="language-javascript">const openExistingFile = async (path, isInModalOrAlerts = false) =&gt; {
  const IS_IOS = Platform.OS === "ios";

  try {
    if (!IS_IOS) {
      return await ReactNativeBlobUtil.android.actionViewIntent(
        path,
        "application/pdf"
      );
    }

    // ReactNativeBlobUtil.ios.openDocument does not work with Modal and Alerts components
    // Reference: https://github.com/joltup/rn-fetch-blob/issues/243
    if (isInModalOrAlerts) {
      await FileViewer.open(path);
    } else {
      await ReactNativeBlobUtil.ios.previewDocument(path);
    }
  } catch (e) {
    console.log("open file error: ", e);
    Alert.alert(e.toString());
  }
};
</code></pre>
<p>On the Android platform, the <code>ReactNativeBlobUtil.android.actionViewIntent</code> method is employed to open the file. This method sends an action view intent, prompting the system to open the specified file.</p>
<p>On iOS, two different methods are used:</p>
<p>1. If the <code>isInModalOrAlerts</code> parameter is true, indicating that the current application context is within a modal dialog or alert, the <code>FileViewer.open</code> method is invoked to open the file. This is because the <code>ReactNativeBlobUtil.ios.previewDocument</code> method <em>may not function properly within modal dialogs or alerts</em> due to certain limitations.</p>
<p>2. If the <code>isInModalOrAlerts</code> parameter is false, indicating that the current application context is not within a modal dialog or alert, the <code>ReactNativeBlobUtil.ios.previewDocument</code> method is directly called to open the file.</p>
<p>In case of any errors, the error messages are logged using <code>console.log</code> and displayed using the <code>Alert.alert</code> method.</p>
<p>By following these guidelines, developers can simplify file download and viewing in React Native apps, ensuring smooth user experiences across iOS and Android platforms.</p>
]]></content:encoded></item><item><title><![CDATA[Handling Timezone Conversion and Sorting in a React Native App]]></title><description><![CDATA[Originally written in 2024. Content may vary slightly across newer versions.

Managing timezone conversion is crucial in React Native development to ensure accurate time information for users across v]]></description><link>https://chloezhou.dev/handling-timezone-conversion-and-sorting-in-a-react-native-app</link><guid isPermaLink="true">https://chloezhou.dev/handling-timezone-conversion-and-sorting-in-a-react-native-app</guid><category><![CDATA[React Native]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[Mobile Development]]></category><category><![CDATA[Web Development]]></category><category><![CDATA[Frontend Development]]></category><dc:creator><![CDATA[Chloe Zhou]]></dc:creator><pubDate>Sun, 29 Mar 2026 05:22:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69bdf388475ca1797455d60f/cc514c3c-6012-43a6-bb85-4de85d603d65.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p>Originally written in 2024. Content may vary slightly across newer versions.</p>
</blockquote>
<p>Managing timezone conversion is crucial in React Native development to ensure accurate time information for users across various timezones. This article addresses the common challenge faced by developers when dealing with timezone conversion and sorting, particularly in scenarios like appointment scheduling.</p>
<p><strong>Project Requirement Scenario:</strong></p>
<p>Imagine a React Native app with an appointment scheduling module. The app needs to display a list of upcoming appointments, each showing the date and time of the appointment. However, the appointment data is stored in different timezones and must be converted to the user’s timezone for accurate display. Additionally, appointments need to be sorted chronologically, irrespective of their original timezones.</p>
<p><strong>Solution Overview:</strong></p>
<p>To meet these requirements, we’ll implement timezone conversion and sorting functionality using several tools and libraries, including <a href="https://github.com/marnusw/date-fns-tz"><strong>date-fns-tz</strong></a>, <a href="https://www.npmjs.com/package/react-native-localize"><strong>react-native-localize</strong></a>, and <a href="https://lodash.com/docs/"><strong>lodash</strong></a>. Our solution will be scoped to American timezones for simplicity.</p>
<p><strong>Timezone Conversion Steps:</strong></p>
<ul>
<li><p><strong>Obtain date-time and timezone info</strong>: Retrieve original date-time and timezone data from appointments.</p>
</li>
<li><p><strong>Validate timezone</strong>: Ensure timezone is valid and within American timezones.</p>
</li>
<li><p><strong>Execute conversion</strong>: Use <code>zonedTimeToUtc</code> and <code>utcToZonedTime</code> to convert date-time from original to user’s local timezone.</p>
</li>
<li><p><strong>Handle errors</strong>: Gracefully manage any conversion errors, e.g., invalid timezones.</p>
</li>
</ul>
<pre><code class="language-javascript">import { zonedTimeToUtc, utcToZonedTime } from 'date-fns-tz';
import { getTimeZone } from 'react-native-localize';

// Retrieve original date-time and timezone info from appointment data
const appointmentTime = '2023-02-20T08:00:00';
const appointmentTimeZone = 'America/New_York';

// Validate timezone information
const hasValidTimeZone = isValidTimeZone(appointmentTimeZone);

if (hasValidTimeZone) {
  try {
    // Get user's local timezone
    const userTimeZone = getTimeZone();

    // Execute timezone conversion
    const utcTime = zonedTimeToUtc(appointmentTime, appointmentTimeZone);
    const localTime = utcToZonedTime(utcTime, userTimeZone);

    // Retrieve timezone abbreviation for user's timezone
    const timeZoneAbbreviation = new Intl.DateTimeFormat('en-US', {
      timeZone: userTimeZone,
      timeZoneName: 'short'
    }).format(localTime).split(' ')[1];

    console.log('Converted appointment time:', localTime);
  } catch (error) {
    console.error('Error converting timezone:', error.message);
  }
} else {
  console.error('Invalid timezone:', appointmentTimeZone);
}

// Validate timezone (one possible implementation, scoped to American timezones)
function isValidTimeZone(timezone) {
  // Intl throws a RangeError for unknown IANA timezone names,
  // so a try/catch doubles as a validity check
  try {
    new Intl.DateTimeFormat('en-US', { timeZone: timezone });
    return timezone.startsWith('America/');
  } catch (e) {
    return false;
  }
}
</code></pre>
<p><strong>Sorting Appointments:</strong></p>
<ul>
<li>Employ lodash’s <strong>orderBy</strong> function to sort the appointments in descending order based on their date-time values.</li>
</ul>
<pre><code class="language-javascript">import _ from 'lodash'

const sortedAppointments = _.orderBy(appointments, ['localTime'], ['desc'])
</code></pre>
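<p>Since the snippet above assumes an existing <code>appointments</code> array, here is a self-contained sketch with hypothetical data, using plain <code>Array.prototype.sort</code> in place of lodash so it runs standalone:</p>
<pre><code class="language-javascript">// Hypothetical appointments, each with a localTime Date already converted
// to the user's timezone in the previous step
const appointments = [
  { id: 1, localTime: new Date('2023-02-20T13:00:00Z') },
  { id: 2, localTime: new Date('2023-02-21T09:00:00Z') },
  { id: 3, localTime: new Date('2023-02-19T17:30:00Z') },
];

// Equivalent to _.orderBy(appointments, ['localTime'], ['desc'])
const sortedAppointments = [...appointments].sort(
  (a, b) => b.localTime - a.localTime
);
// sortedAppointments.map((a) => a.id) is [2, 1, 3]
</code></pre>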
<p>By following this approach, React Native developers can effectively handle timezone conversion and sorting in their projects, ensuring a seamless user experience across different timezones.</p>
<blockquote>
<p>Feel free to share your experiences, thoughts, or challenges related to timezone handling in mobile apps in the comments section below.</p>
</blockquote>
<p><strong>😊 Fun Fact</strong></p>
<p>In our appointment scheduling scenario, each appointment is stored using different timezones.</p>
<p><strong>Interestingly, if the sorting is based solely on dates and doesn’t consider the time component, there’s no need for timezone conversion.</strong></p>
<p>Dates, unlike times, are inherently timezone-agnostic. Regardless of your location, a specific date remains the same. Therefore, timezone conversion is only necessary when dealing with time-specific information. Since times can vary across different timezones, it’s crucial to handle timezone conversion appropriately to ensure accurate time representation in your React Native app.</p>
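<p>A quick sketch of that observation: ISO-formatted date strings (YYYY-MM-DD) sort chronologically as plain strings, so no conversion step is needed:</p>
<pre><code class="language-javascript">// For ISO 8601 date-only values, lexicographic order equals chronological
// order, so a plain string sort is enough
const dates = ['2023-03-01', '2023-01-15', '2023-02-20'];
const sortedDates = [...dates].sort();
// sortedDates is ['2023-01-15', '2023-02-20', '2023-03-01']
</code></pre>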
]]></content:encoded></item><item><title><![CDATA[Conditional Rendering in React/React Native — Examples]]></title><description><![CDATA[Originally written in 2023. Content may vary slightly across newer versions.

This article is part of a series that document my first react native project. They encompass a variety of topics, and come]]></description><link>https://chloezhou.dev/conditional-rendering-in-react-react-native-examples</link><guid isPermaLink="true">https://chloezhou.dev/conditional-rendering-in-react-react-native-examples</guid><category><![CDATA[React]]></category><category><![CDATA[React Native]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[Frontend Development]]></category><category><![CDATA[webdev]]></category><dc:creator><![CDATA[Chloe Zhou]]></dc:creator><pubDate>Sun, 22 Mar 2026 09:24:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69bdf388475ca1797455d60f/f2709052-e04e-4e20-a8f6-1385b6600e9e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p>Originally written in 2023. Content may vary slightly across newer versions.</p>
</blockquote>
<p><em>This article is part of a series that documents my first React Native project. The posts encompass a variety of topics and come in different shapes and flavours — some are hands-on notes, some are best practices distilled from what I learned, and some are an endeavour to get an in-depth understanding of the underlying mechanisms of how React / React Native works.</em></p>
<p>It is almost impossible not to use conditional rendering in a React / React Native app. Therefore, it is crucial for developers to understand the different scenarios in which conditional rendering is required and which kind of conditional rendering is most appropriate for the current situation.</p>
<h4>The fundamentals — Part 1</h4>
<p>Let's go over some fundamental knowledge about rendering in JSX. According to the <a href="https://legacy.reactjs.org/docs/jsx-in-depth.html#booleans-null-and-undefined-are-ignored">React docs on JSX in depth</a>, <strong>JSX expressions that evaluate to booleans, <code>null</code>, or <code>undefined</code> are ignored during rendering — they simply don't render.</strong></p>
<p>So if we want a component to render conditionally, or to skip rendering altogether, we can return one of those values instead.</p>
<h4>The fundamentals — Part 2</h4>
<p>There is a whole section dedicated to <a href="https://react.dev/learn/conditional-rendering#conditionally-returning-nothing-with-null">conditional rendering in the React docs</a>, so I'd like to skip the reinventing-the-wheel play, and give you a few minutes to refresh your memory by quickly skimming through it.</p>
<p>I will go over each conditional rendering technique with a real-project example and explain why a specific technique is suitable for the situation at hand.</p>
<h4>Use Logical AND operator (&amp;&amp;)</h4>
<p>Before jumping into the hands-on part, I’d like to say I'm very much inspired by <a href="https://medium.com/@danajanoskova">Dana Janoskova</a> and her article about how to write clean JSX and components. She talks about how to figure out which props are required and which are not. The basic idea is to ask: what does this component do, and which props are necessary for it to render? In other words, without this prop / piece of data, the existence of this component is meaningless.</p>
<p>Equipped with this mindset, I set out to think about the purpose of the LocationItem component. This component is a modal that contains the detailed information about a certain location. When the user clicks on a LocationMarker component, this modal shows. <strong>So the most important prop of this component is the location prop</strong>.</p>
<p>Here, I want to use the &amp;&amp; operator because: 1) both isShowItem and !_.isEmpty(location) must be satisfied for the LocationItem to render; 2) if both are true, the LocationItem renders, and if not, the expression short-circuits to a falsy value, which renders nothing.</p>
<p>However, there is a pitfall you might want to pay attention to (I fell into it myself, so I’m writing this piece to remind myself not to repeat it). <strong>Avoid using values that may be empty strings or 0 in the conditions. Otherwise, the empty string or 0 itself is rendered!</strong> In React Native, this throws the error: <strong>Text strings must be rendered within a</strong> <code>&lt;Text&gt;</code> <strong>component.</strong></p>
<pre><code class="language-javascript">import React, { useState, useCallback } from 'react'
import _ from 'lodash'
import { Marker } from 'react-native-maps'

import LocationItem from 'Locations/LocationItem'
import styles from './styles'

const LocationMarker = ({ latitude, longitude, location }) =&gt; {
  const [isShowItem, setIsShowItem] = useState(false)
  
  const handleMarkerPress = useCallback(() =&gt; setIsShowItem(true), [])
  
  return (
    &lt;Marker coordinate={{ latitude, longitude }} onPress={handleMarkerPress}&gt;
    {isShowItem &amp;&amp; !_.isEmpty(location) &amp;&amp; (
      &lt;LocationItem
        isShowItem={isShowItem}
        setIsShowItem={setIsShowItem}
        location={location}
      /&gt;
    )}
    &lt;/Marker&gt;
  )
}
</code></pre>
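<p>Before moving on, here is a tiny sketch of the 0 pitfall mentioned above, outside of JSX (the <code>messages</code> array is hypothetical):</p>
<pre><code class="language-javascript">const messages = [];

// The && expression evaluates to its left operand when that operand is falsy:
const rendered = messages.length && 'Badge'; // 0, not false!
// React renders 0 as text, and in React Native any text outside a Text
// component throws an error.

// Coercing to a real boolean renders nothing instead:
const safe = messages.length > 0 && 'Badge'; // false
</code></pre>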
<h4>Use If to conditionally returning JSX</h4>
<p>If the component needs to go through a loading state before rendering, it is more suitable to use an if statement to conditionally return JSX, which also avoids nesting (again inspired by Dana).</p>
<pre><code class="language-javascript">import React, { useState, useCallback } from 'react'
import _ from 'lodash'
import { Marker } from 'react-native-maps'

import LocationItem from 'Locations/LocationItem'
import styles from './styles'

const LocationMarker = ({ latitude, longitude, location }) =&gt; {
  const [isShowItem, setIsShowItem] = useState(false)
  
  const handleMarkerPress = useCallback(() =&gt; setIsShowItem(true), [])
  
  if (!longitude || !latitude) {
    return null
  }
  
  return (
    &lt;Marker coordinate={{ latitude, longitude }} onPress={handleMarkerPress}&gt;
    {isShowItem &amp;&amp; !_.isEmpty(location) &amp;&amp; (
      &lt;LocationItem
        isShowItem={isShowItem}
        setIsShowItem={setIsShowItem}
        location={location}
      /&gt;
    )}
    &lt;/Marker&gt;
  )
}
</code></pre>
<p>Using the same example as before, I only added the if condition for longitude &amp; latitude. This component only needs to render if it has valid longitude and latitude values.</p>
<p>To sum up, conditional rendering is a must when developing with React or React Native. I hope these tips help you go a long way!</p>
]]></content:encoded></item><item><title><![CDATA[Why My Model Wouldn’t Deploy to Hugging Face Spaces (and What Git LFS Actually Does)]]></title><description><![CDATA[I trained a simple “is cat” image classifier using fastai and wanted to deploy a small demo on Hugging Face Spaces. I already had a working app.py and a trained model.pkl, so my plan felt straightforw]]></description><link>https://chloezhou.dev/why-my-model-wouldn-t-deploy-to-hugging-face-spaces-and-what-git-lfs-actually-does</link><guid isPermaLink="true">https://chloezhou.dev/why-my-model-wouldn-t-deploy-to-hugging-face-spaces-and-what-git-lfs-actually-does</guid><category><![CDATA[Git]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[Python]]></category><category><![CDATA[webdev]]></category><dc:creator><![CDATA[Chloe Zhou]]></dc:creator><pubDate>Sat, 21 Mar 2026 08:36:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69bdf388475ca1797455d60f/ce1dd556-5f56-4958-852e-b08642384138.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I trained a simple <strong>“is cat” image classifier</strong> using <a href="https://docs.fast.ai/quick_start.html">fastai</a> and wanted to deploy a small demo on <a href="https://huggingface.co/spaces">Hugging Face Spaces</a>. I already had a working <code>app.py</code> and a trained <code>model.pkl</code>, so my plan felt straightforward: commit everything and push it to the remote Hugging Face repository.</p>
<p>At that point, I thought the hard part was already over. The model was trained, the demo worked locally, and deployment felt like it should be a routine step — commit, push, done.</p>
<p>That assumption didn’t last long.</p>
<h3>What I Tried (and Why It Kept Failing)</h3>
<p>When I tried to push the repository, Git rejected it with the following error:</p>
<pre><code class="language-bash">remote: -------------------------------------------------------------------------
remote: Your push was rejected because it contains files larger than 10 MiB.
remote: Please use https://git-lfs.github.com/ to store large files.
remote: See also: https://hf.co/docs/hub/repositories-getting-started#terminal
remote:
remote: Offending files:
remote:   - model.pkl (ref: refs/heads/main)
remote: -------------------------------------------------------------------------
To https://huggingface.co/spaces/chloezhoudev/minima
 ! [remote rejected] main -&gt; main (pre-receive hook declined)
error: failed to push some refs to 'https://huggingface.co/spaces/chloezhoudev/minima'
</code></pre>
<p>I could see what Git was complaining about — <code>model.pkl</code> was too large — but I didn’t really understand <em>why</em> this was a problem, or what “<a href="https://git-lfs.com/">Git LFS</a>” actually meant in practice. I had never used it before.</p>
<p>So I followed the instructions in the error message and tried to fix it step by step.</p>
<p>First, I installed Git LFS and set it up locally:</p>
<pre><code class="language-bash">brew install git-lfs    # Install Git LFS (macOS via Homebrew)
git lfs install         # Initialize Git LFS and register Git hooks
git lfs version         # Confirm Git LFS installation
</code></pre>
<p>Then I told Git LFS to track my model file and committed it:</p>
<pre><code class="language-bash">git lfs track "model.pkl"
git add model.pkl
git commit -m "Track model file with Git LFS"
git push
</code></pre>
<p>The push failed again — with the <strong>exact same error</strong>.</p>
<p>At this point, I was getting annoyed. Instead of really understanding how Git LFS worked, I tried to brute-force my way through the problem and asked ChatGPT for help. It suggested running the following command:</p>
<pre><code class="language-bash">git rm --cached model.pkl
</code></pre>
<p>Then tracking the file with Git LFS again and committing:</p>
<pre><code class="language-bash">git lfs track "model.pkl"
git commit -m "Store model file using Git LFS"
git push
</code></pre>
<p>Third attempt — same error. 😅</p>
<p>At this point, I seriously considered giving up. But since I already understood <em>what</em> I wanted to do, I felt I should at least understand <em>why</em> this wasn’t working. So I went back to ChatGPT one more time, and this time it suggested something very different:</p>
<pre><code class="language-bash">git checkout --orphan clean-main
git add .
git commit -m "Initial deployment with Git LFS model"

git branch -M main
git push -f origin main
</code></pre>
<p>This time, the push finally worked.</p>
<p>Now that everything was deployed successfully, it was time to stop guessing and actually understand what had happened:</p>
<ul>
<li>Why did Git keep rejecting my pushes?</li>
<li>What does Git LFS <em>actually</em> do?</li>
<li>Why didn't <code>git rm --cached</code> help?</li>
<li>And why did creating an orphan branch fix everything?</li>
</ul>
<h3>What I Eventually Learned About Git and Git LFS</h3>
<p>The first problem was that I didn’t actually understand <em>why</em> my push was rejected. Reading the error alone wasn’t enough — I just saw a message telling me to use Git LFS. It wasn’t until later that I learned Hugging Face repos enforce a <strong>pre-receive hook</strong> on their side that scans <em>all commits being pushed to the remote</em> (any commits not already on Hugging Face's servers) and <strong>rejects the push if any commit contains a file larger than 10 MiB</strong>.</p>
<p>Once that clicked, I began to think about how Git actually stores files and what role Git LFS plays. The simplest way to think about it is this:</p>
<ul>
<li><p>A normal Git repository stores file contents as <strong>blob objects</strong> — each file you add is stored roughly at its original size in the history. For binary files, this is exactly what happens: Git takes the content and writes a blob object containing that data.</p>
</li>
<li><p>Git LFS replaces large files with <strong>pointer files</strong> and stores the actual big file contents separately on a dedicated LFS server. When you <code>git add</code> a file while Git LFS is enabled, Git LFS generates a small pointer and hands that pointer file over to Git to store in the repository. The large file itself gets uploaded to the Git LFS store instead.</p>
</li>
</ul>
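<p>For reference, the pointer file that Git LFS hands to Git is just a few lines of text (the <code>oid</code> and <code>size</code> values below are placeholders):</p>
<pre><code class="language-bash">version https://git-lfs.github.com/spec/v1
oid sha256:4d7a214614ab2935c943f9e0ff69d22eadbb8f32b1258daaa5e2ca24d17e2393
size 133189
</code></pre>
<p>One way to check what Git actually committed is <code>git cat-file -p HEAD:model.pkl</code>: if the output looks like this pointer, Git LFS is doing its job; if you see raw binary data, the full blob went into history.</p>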
<blockquote>
<p>💡 <strong>Key takeaway</strong></p>
<p>So in my case, <code>model.pkl</code> was a large binary file.<br />Without Git LFS installed <em>before it was ever added</em>, Git simply stored it as a normal blob at full size.<br />That’s why my first push was rejected — the blob itself exceeded Hugging Face’s 10 MiB limit.</p>
</blockquote>
<p>The next piece of the puzzle was the <code>.gitattributes</code> file that Hugging Face includes automatically when you create a Space repository. By default it contains a line like this:</p>
<pre><code class="language-bash">*.pkl filter=lfs diff=lfs merge=lfs -text
</code></pre>
<p>This line tells Git which files <strong>should</strong> be tracked by Git LFS instead of by normal Git. However, for this to work, you must have Git LFS installed <em>and</em> initialized (<code>git lfs install</code>) <strong>before</strong> adding the file. Since I didn't have Git LFS set up when I first committed <code>model.pkl</code>, that rule had no effect.</p>
<p>Which brings us to the next question: what happened on the second push?</p>
<p>Even after I installed Git LFS and tracked the model file, the push was rejected again.</p>
<blockquote>
<p>💡 <strong>Key takeaway</strong></p>
<p>Installing Git LFS does not fix files that were already committed.<br />The old commit containing <code>model.pkl</code> as a normal blob was still in the history, and Hugging Face rejected the push because of it.</p>
</blockquote>
<p>What about <code>git rm --cached</code>?</p>
<blockquote>
<p>⚠️ <strong>Important</strong></p>
<p><code>git rm --cached</code> only affects future commits. It does <strong>not</strong> remove files from your existing Git history.</p>
</blockquote>
<p>The command removes the file from the staging area and stops tracking it in upcoming commits, while leaving the file intact in your working directory. However, it does nothing to delete the earlier commit where the large file was first added.</p>
<p>Because that original commit still contained <code>model.pkl</code> as a normal Git blob, the problematic file remained in the repository history — and Hugging Face continued to reject the push.</p>
<p>Finally, creating a new branch with no history (<code>git checkout --orphan</code>) fixed everything because it <strong>started with a clean slate</strong> that had no commits at all.</p>
<p>Once I added the files with Git LFS already configured, committed them, renamed <code>clean-main</code> to <code>main</code> (using <code>git branch -M main</code>), and force-pushed, the remote accepted it. There were no old blob objects in the history for the pre-receive hook to reject.</p>
<p>One more warning: using <code>--orphan</code> and especially <code>git push -f</code> is dangerous if multiple collaborators are using the same branch, because this <strong>permanently deletes all previous commits</strong> from the remote and can break everyone else's local copies. In my case, the Space repo was just for deployment, so this was fine, but it’s something to be careful about in team settings.</p>
<blockquote>
<p>💡 <strong>Bonus: Git LFS isn’t the only option anymore</strong></p>
<p>Hugging Face now also recommends <a href="https://huggingface.co/docs/hub/en/xet/index"><strong>git-xet</strong></a>, a newer backend designed specifically for large machine learning artifacts.</p>
<p>Both Git LFS and git-xet store large file content separately from Git's object database. The difference is that Git LFS stores complete files, while git-xet uses chunking and deduplication to handle incremental changes more efficiently — particularly useful for ML models that evolve over time.</p>
</blockquote>
<p>After all that debugging, the model is finally deployed and working as expected.</p>
<p>You can try the live demo <a href="https://huggingface.co/spaces/chloezhoudev/minima">here</a>.</p>
]]></content:encoded></item><item><title><![CDATA[How I Built Schedulicious: A Meal Planning Web App]]></title><description><![CDATA[How it Started
I joined Chingu during my job search because I wanted to stay productive, sharpen my skills, and gain more hands-on experience working in a team.
Job hunting can be a unpredictable proce]]></description><link>https://chloezhou.dev/how-i-built-schedulicious-a-meal-planning-web-app</link><guid isPermaLink="true">https://chloezhou.dev/how-i-built-schedulicious-a-meal-planning-web-app</guid><category><![CDATA[React]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[webdev]]></category><category><![CDATA[Open Source]]></category><dc:creator><![CDATA[Chloe Zhou]]></dc:creator><pubDate>Sat, 21 Mar 2026 08:29:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69bdf388475ca1797455d60f/1477c193-8c0e-413b-b88f-61055630d4ed.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>How it Started</h3>
<p>I joined <a href="https://www.chingu.io/">Chingu</a> during my job search because I wanted to stay productive, sharpen my skills, and gain more hands-on experience working in a team.</p>
<p>Job hunting can be an unpredictable process, and instead of waiting around, I saw Chingu as the perfect opportunity to build something meaningful while collaborating with others.</p>
<p>Looking back, it turned out to be one of the best decisions I made—not only did I improve my technical and teamwork skills, but I also got to experience the full process of bringing a project from 0 to 1.</p>
<h3>What We Built</h3>
<p>The project is a <strong>meal planning tool</strong> designed to help managers efficiently create weekly menus for employees.</p>
<h4>Key Features:</h4>
<ul>
<li><p><strong>One-Click Menu Generation</strong> – Instead of manually selecting meals, managers can generate a balanced menu with a single click.</p>
</li>
<li><p><strong>No Repeated Meals</strong> – The system ensures that each day’s dish is unique, preventing repetition throughout the week.</p>
</li>
<li><p><strong>Easy Regeneration</strong> – If the results aren’t satisfactory, a “regenerate” button allows them to create a new menu instantly.</p>
</li>
<li><p><strong>Export Functionality</strong> – Users can save and track meal plans in PDF or Excel format for easy reference.</p>
</li>
</ul>
<h3>From 0 to 1</h3>
<p>Each team receives a product spec with predefined features, but it’s up to us to break them down, plan the implementation, and bring the product to life.</p>
<h4>Leading the Project</h4>
<p>Our team was unique—no project manager, product owner, or UI/UX designer—just developers. Seeing the opportunity to lead in the absence of a project manager, I volunteered to take on the role. It felt like a perfect challenge to organize the team and drive the project forward.</p>
<p>I kicked off the project by planning the initial meeting: setting an agenda, defining each team member’s tasks, and ensuring we were aligned on the outcomes. Thanks to this preparation, the meeting was productive, and we defined our MVP features and roles for the upcoming sprint.</p>
<h4>Building the Product Backlog</h4>
<p>In sprint 2, I took on the responsibility of creating a Product Backlog, which is usually handled by a Product Owner. I broke down the MVP features into epics, user stories, and tasks, creating templates to ensure everyone was aligned.</p>
<p>This process made me realize how essential the backlog is as a roadmap for the team. It wasn’t easy—understanding each feature, defining clear acceptance criteria, and avoiding repetition was a challenge. But as I worked through it, I gained a clearer understanding of how each feature impacts the end user.</p>
<p>One instance where I exercised product thinking was when I adjusted the feature flow based on my own understanding of user needs and how they would interact with the product to ensure a smooth experience.</p>
<p>Unlike in more structured environments where backlogs are typically fixed, I had the freedom to adapt the “HOW” during development, tailoring the features to better suit user needs. This process was both challenging and rewarding, sharpening my product thinking and development skills.</p>
<h3>The Tech Stack</h3>
<p>For this project, we didn’t have to worry about a backend since we were working solely with a dish API. Therefore, I chose <strong>React</strong> as the core of the tech stack, paired with <strong>Vite</strong> as the build tool to ensure fast development and smooth hot reloading.</p>
<p>For state management, I initially considered <strong>Context API</strong> and <strong>Zustand</strong>, ultimately choosing <strong>Zustand</strong> for the following reasons:</p>
<ul>
<li><p>Our use case was simple but involved sharing state across multiple routes (e.g., allergies and weekly menus).</p>
</li>
<li><p>Zustand provides built-in middleware for local storage persistence, which saved us time and effort in implementing our own solution.</p>
</li>
<li><p>Out of the box, Zustand offers better performance, as it updates only the relevant parts of the state without unnecessary re-renders.</p>
</li>
<li><p>Zustand is easier to scale. Should we need to manage more complex states, such as loading or error handling, or handle more sophisticated logic in the future, it can grow with the project.</p>
</li>
</ul>
<h3>UI/UX Design</h3>
<p>For this MVP, I aimed to create a clean, modern, and easy-to-navigate UI that would allow users to start using the tool immediately without the need for a sign-in or sign-up process. The goal was to minimize friction and make the user experience as seamless as possible.</p>
<p>A bright color palette, chosen for its ability to convey a sense of openness and clarity, sets the tone for the UI, thanks to our talented UI/UX designer. She laid the foundation for the design, and I tailored the UX to suit the users’ needs.</p>
<h4>Designing the Swap Feature</h4>
<p>One of the key features I focused on was ensuring users could easily swap dishes if they weren’t happy with their selection.</p>
<p>Initially, I considered allowing users to search through a large list of dishes, which could feel intuitive but wouldn’t be the most efficient use of their time. Instead, I designed a modal popup that presents five recommended replacement dishes.</p>
<p>These dishes are randomly generated from the available dish database, ensuring no repeats with the current weekly menu. Offering five options strikes a balance between providing enough variety and avoiding overwhelming the user. I believe this design choice helps users make quick, informed decisions without limiting their options.</p>
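<p>The selection logic described above can be sketched roughly like this (the function and field names are my own, not the project's actual code):</p>
<pre><code class="language-javascript">// Pick `count` random dishes, excluding anything already on the weekly menu
function getSwapSuggestions(allDishes, weeklyMenu, count = 5) {
  const taken = new Set(weeklyMenu.map((dish) => dish.id));
  const candidates = allDishes.filter((dish) => !taken.has(dish.id));

  // Fisher-Yates shuffle so every remaining dish has an equal chance
  for (let i = candidates.length - 1; i > 0; i -= 1) {
    const j = Math.floor(Math.random() * (i + 1));
    [candidates[i], candidates[j]] = [candidates[j], candidates[i]];
  }
  return candidates.slice(0, count);
}
</code></pre>
<p>Because the current menu is excluded up front, a regenerated suggestion can never collide with a dish the user already has that week.</p>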
<p>Here’s the UI in action:</p>
<img src="https://media4.giphy.com/media/v1.Y2lkPTc5MGI3NjExbHkzY3h0Z3o0eDl1eG92MDAwMzRvcmtycHc0eGc5eTVidmtkbjVtaSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/VJlquVE33e0io1POhP/giphy.gif" alt="" style="display:block;margin:0 auto" />

<h3>Overcoming Challenges and Delivering the Project</h3>
<p>Just two sprints into the project, our team faced some challenges when two members became less active, and our UI/UX designer, who was also expected to contribute to development, had to step back.</p>
<p>By the time we were ready to dive into the development phase, it was just me. Although I briefly considered quitting, I decided to push forward and deliver the project on my own.</p>
<p>I focused on core feature development and refining the user experience. Despite the challenges, I was able to successfully complete the product!</p>
<p>I shared my full journey on Twitter.</p>
<p><a class="embed-card" href="https://twitter.com/czhoudev/status/1890735259190440170">https://twitter.com/czhoudev/status/1890735259190440170</a></p>

<p>Special thanks to @numulaa for laying the foundation for the design, and you can find her work here on <a href="https://github.com/numulaa">GitHub</a>.</p>
<h3>Check out the project!</h3>
<p>I’m excited to share that this project is open source! You can explore the code, contribute, or just check it out on <a href="https://github.com/chloezhoudev/schedulicious">GitHub</a>. If you found it useful or interesting, feel free to give it a star ⭐—it would mean a lot!</p>
<p>Try the live <a href="https://schedulicious.vercel.app/">demo</a> or watch the YouTube walkthrough to see it in action!</p>
<p><a class="embed-card" href="https://www.youtube.com/watch?v=M74mX-gAlKo">https://www.youtube.com/watch?v=M74mX-gAlKo</a></p>

<p>💬 Got feedback? I’d love to hear your thoughts! Feel free to open an issue on GitHub or drop me a message.</p>
<p>I’ll continue improving and iterating on it—stay tuned for updates! 🚀</p>
]]></content:encoded></item><item><title><![CDATA[What I Learned from My First Legacy Frontend Project]]></title><description><![CDATA[This was my first time taking over a legacy frontend codebase - one that had been around for over five years.
The system was built with React 16.8 and relied heavily on class components. This code was]]></description><link>https://chloezhou.dev/what-i-learned-from-my-first-legacy-frontend-project</link><guid isPermaLink="true">https://chloezhou.dev/what-i-learned-from-my-first-legacy-frontend-project</guid><category><![CDATA[React]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[webdev]]></category><category><![CDATA[legacycode]]></category><dc:creator><![CDATA[Chloe Zhou]]></dc:creator><pubDate>Sat, 21 Mar 2026 08:11:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69bdf388475ca1797455d60f/47d2355d-5110-4d48-aa45-a94e73831ad4.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This was my first time taking over a legacy frontend codebase - one that had been around for over five years.</p>
<p>The system was built with React 16.8 and relied heavily on class components. The code was tightly coupled, hard to follow, and even harder to extend. Some components ran over a thousand lines, with tangled logic and strong dependencies. If you changed one thing, you couldn't be sure what else might break.</p>
<p>My task sounded straightforward: support a new business type with some custom fields and a different layout. But as I dug in, I realized it was more than just adding a few inputs. I had to reuse existing logic, introduce new behaviors, and make sure I didn't break anything already in use. <strong>It wasn't just development - it was kind of software surgery.</strong> Every change had to be precise. One wrong move, and things could break in ways you wouldn't expect.</p>
<p>This article doesn't dive into technical implementation, nor is it meant to complain about how messy legacy code can be. Instead, I want to document the three main pitfalls I encountered during my first legacy delivery - and the three key lessons I learned from them.</p>
<p><strong>Pit #1: Don't Refactor Logic Without Fully Understanding the System First</strong></p>
<p>Before I started the actual delivery of the new feature, I had a bit of extra time, so I decided to take the opportunity to familiarize myself with the module that I was about to modify. As I dug into the code, I noticed that some parts of the logic had very tight coupling. This made me think that it might be worth refactoring those sections as part of the preparation for the upcoming delivery. It seemed like a good idea at the time.</p>
<p><strong>But I made a crucial mistake. I didn't have a complete understanding of the logic I was about to refactor.</strong> I thought I had a good grasp of it, but in reality, I didn't know the ins and outs of the code well enough. I didn't fully comprehend how it interacted with other parts of the system, what its true dependencies were, or how the inputs and outputs were structured.</p>
<p>Here's the key lesson: <strong>when you're refactoring code, you need to have 100% clarity on the logic you're working with.</strong> That doesn't mean understanding the entire system, but understanding <strong>your specific module</strong> - what dependencies it has, and what its inputs and outputs are - is essential.</p>
<p>Some of you might be thinking, "Why not just write tests first, and then refactor? Isn't that what TDD is for?" <strong>Well, yes… and no.</strong> While TDD is widely recommended and could certainly help in a clean, modern codebase, the reality is quite different when you're dealing with a legacy system like the one I was working with. And let's be honest - who really enjoys writing tests?</p>
<p>After I refactored the code, I found that the behavior had changed unexpectedly. This made it difficult for me to trace back the original logic and behavior, as I had unintentionally altered it without fully understanding it in the first place.</p>
<p>To fix this, I needed to identify where the issue came from. I ran tests in my UAT environment, where I could compare the new behavior with the original. By doing this, I was able to debug the inconsistencies between what was happening in my local development environment and the behavior in the higher environment. This process helped me pinpoint where my logic change had led to unexpected outcomes, and I was able to correct the mistake.</p>
<p><strong>So the core lesson here:</strong> never blindly refactor code without fully understanding the business context and logic behind it. Even if it seems like a "pure UI improvement", it can have unintended consequences that affect the overall system.</p>
<p><strong>Pit #2: Don't Wait Until You Fully Understand the System to Start Refactoring - Embrace the "Incremental Approach"</strong></p>
<p>While it's essential to have a solid understanding of the logic you're working with before making changes (as discussed in the first pit), <strong>there's a fine line between being cautious and getting stuck in analysis paralysis.</strong> The key lesson here is: don't wait to understand every single detail of the system before taking action. Instead, aim for an incremental approach - make small, controlled changes and expand your understanding as you go.</p>
<p>The reality is that achieving a comprehensive understanding of the entire system is nearly impossible, especially when working with complex legacy code. The system is intricate, with too many moving parts, and expecting to grasp it all upfront can lead to endless analysis with little actual progress.</p>
<p><strong>So, what's the solution?</strong></p>
<p><strong>Find a reasonable entry point where you can start making small, manageable changes</strong> - this will help build your confidence and gradually expand your understanding. For example, when approaching frontend tasks, it's common practice to start with static UI. Static UI tends to have simpler, more isolated logic, making it a good entry point for understanding the structure of the code.</p>
<p>Take a look at how the static UI is built. Focus on things like the <code>render</code> function in your React components, which handles the visual part of the UI. This part is generally less complicated, so it gives you a solid foundation to start making small adjustments. Once you're comfortable with that, you can move on to more interactive parts of the code, tackling them one step at a time.</p>
<p>This incremental approach works especially well with legacy systems. Trying to understand everything upfront before making any changes can slow you down unnecessarily. Instead, by taking it step-by-step, you can start making progress right away and build your understanding as you go.</p>
<p><strong>Pit #3: Migrating Lifecycle Methods Isn't Just About Replacing Them</strong></p>
<p>When I took over this legacy code, I was already accustomed to React 18 and functional components with hooks, so I wasn't very familiar with class components and their lifecycle methods - especially the ones like <code>componentWillReceiveProps</code>, which is deprecated, and <code>getDerivedStateFromProps</code>, which is the recommended alternative. My initial instinct was to replace these deprecated methods with their newer counterparts. But soon, I realized that migration isn't just about swapping one lifecycle method for another.</p>
<p>React provides alternatives, such as <code>getDerivedStateFromProps</code> for <code>componentWillReceiveProps</code>. However, migrating lifecycle methods isn't as simple as finding a "replacement." Each lifecycle method is triggered at specific points in the component's lifecycle and serves particular use cases. Understanding how and when each method is invoked is critical. Without this understanding, you risk introducing new issues or inconsistencies.</p>
<p>Take my specific case, for example. In the project I was working on, <code>componentWillReceiveProps</code> was in use, and my goal was to migrate it to <code>getDerivedStateFromProps</code>. I was migrating because the new requirement involved updating the component's state based on new props, which made <code>getDerivedStateFromProps</code> a suitable alternative. However, there's a <strong>catch</strong> - <code>componentWillReceiveProps</code> and <code>getDerivedStateFromProps</code> have different behaviors and usage patterns.</p>
<p>After analyzing the existing code, I realized that <code>componentWillReceiveProps</code> was already updating state based on new props - just different properties of the state. Since the new requirement aligned with exactly what <code>getDerivedStateFromProps</code> is intended for, I saw an <strong>opportunity</strong> to merge the new requirement into the existing logic, which made the migration much smoother.</p>
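<p>To make the difference concrete, here's a simplified, hypothetical version of that kind of migration - the component and field names are made up, and a stub stands in for <code>React.Component</code> so the snippet is self-contained:</p>
<pre><code class="language-javascript">// Stand-in base class so this sketch runs without React installed;
// in the real component this would be React.Component.
class Component {}

class ClientSummary extends Component {
  constructor(props) {
    super();
    this.props = props;
    // Mirror the incoming prop in state so changes can be detected later.
    this.state = { clientName: props.clientName, prevClientName: props.clientName };
  }

  // Old approach (deprecated): had access to `this` and ran only
  // when new props arrived.
  //
  //   componentWillReceiveProps(nextProps) {
  //     if (nextProps.clientName !== this.props.clientName) {
  //       this.setState({ clientName: nextProps.clientName });
  //     }
  //   }

  // New approach: static (no `this`), runs right before every render,
  // and returns a state patch instead of calling setState.
  static getDerivedStateFromProps(props, state) {
    if (props.clientName !== state.prevClientName) {
      return { clientName: props.clientName, prevClientName: props.clientName };
    }
    return null; // no state update needed
  }
}
</code></pre>
<p>The static method forces you to restate the old logic in terms of pure inputs and outputs, which is exactly why the migration isn't a mechanical find-and-replace.</p>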
<p>Lesson learned: Migrating lifecycle methods isn't just about replacing deprecated methods with the new ones React provides. It's about understanding the logic behind each method, when it's triggered, and whether it fits your specific needs. In my case, I needed to ensure that the new and old logic could be combined effectively before proceeding with the migration.</p>
<p><strong>Respect the System, Respect the Unknown</strong></p>
<p>Legacy systems aren't monsters. They've grown over time to support real, often messy business needs. The complexity you see is rarely accidental - it reflects years of decisions, constraints, and trade-offs.</p>
<p>After delivering my first feature on this project, I started to see things more clearly. Respecting the system doesn't mean staying hands-off - it just means knowing what's already there, and being cautious about what might break when you change it. <strong>You can still refactor boldly, but only if you understand the impact.</strong></p>
<p>When things feel overwhelming, it helps to find one small, solid starting point. Something you're sure about. From there, piece by piece, you build up enough context to move forward. That's how I've learned to work with legacy code - not by fully mastering it first, but by finding my footing one step at a time.</p>
<p>What about you? Have you worked on legacy projects? Let me know below!</p>
]]></content:encoded></item><item><title><![CDATA[A Real Performance Bug I Found and Fixed — Step by Step]]></title><description><![CDATA[A few days ago, while working on a bug at work, I noticed something odd: after I fixed the bug, the default client name value in a dropdown component took almost 2 to 3 seconds to appear after the pag]]></description><link>https://chloezhou.dev/a-real-performance-bug-i-found-and-fixed-step-by-step</link><guid isPermaLink="true">https://chloezhou.dev/a-real-performance-bug-i-found-and-fixed-step-by-step</guid><category><![CDATA[React]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[webdev]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Chloe Zhou]]></dc:creator><pubDate>Sat, 21 Mar 2026 04:09:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69bdf388475ca1797455d60f/93d2b7ab-8038-4824-9316-425c4de800eb.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A few days ago, while working on a bug at work, I noticed something odd: after I fixed the bug, the default client name value in a dropdown component took almost 2 to 3 seconds to appear after the page loaded. That kind of lag is probably unacceptable to users, so I know I had to dig deeper.</p>
<p>Turns out, it was a performance issue.</p>
<p>We always hear advice like “don’t optimize prematurely” or “only solve performance problems when they exist.” Well, this was the first time I actually ran into one myself — and I think the debugging process is worth documenting.</p>
<p>In this post, I’ll walk through how I tracked down the cause of the slowdown. I can’t share the actual code or logs because of company policy, but I’ll reconstruct the process using pseudocode and reasoning steps. If you’re a frontend developer wondering how to approach a performance issue in a React app, I hope this gives you something concrete to take away.</p>
<h2>If it’s not the API call, what is it?</h2>
<p>When I first noticed the lag in how the client name appeared, my instinct was to check the <strong>Network</strong> tab. I opened DevTools and placed the network panel side-by-side with the UI so I could watch what was happening in real time. Sure enough, the API call to fetch the client name came back quickly with a 200 OK. But even though the data returned fast, the client name still appeared in the dropdown field with a noticeable delay.</p>
<p>That made me pause — if it’s not the network latency, then what is? Could it be something in the frontend rendering cycle? Maybe <a href="https://formik.org/">Formik</a>? Maybe <a href="https://react.dev/learn/scaling-up-with-reducer-and-context">context</a>?</p>
<p>Here’s a bit of background. The client name is fetched from an API call and saved into the <code>userInfo</code> object, which lives in a global context. Formik pulls its initial values from this context. But initial values alone aren’t reactive — they don’t update just because the context does. So after the context gets the latest client name, I also call <code>setFieldValue</code> to make sure the form reflects the updated value.</p>
<p>At this point, I suspected the delay might be happening somewhere in that chain — from receiving the data to updating the form. So the question became: <strong>where exactly is the bottleneck?</strong></p>
<h2>Verifying data flow from API to form</h2>
<p>My next step was to check how state updates propagated after the API call. The question was: after updating the context with the new client name, how exactly does that value reach the Formik form? I suspected a <code>useEffect</code> inside the page component (called Summary) might be responsible for syncing the values. So I added logs to verify the data flow step by step.</p>
<h3>Step 1: Log inside Summary’s useEffect to confirm form syncing</h3>
<pre><code class="language-javascript">useEffect(() =&gt; {
  console.log('[Summary] dealInfo.clientName effect triggered:', dealInfo?.clientName);

  if (dealInfo?.clientName?.trim()) {
    setFieldValue('summary.clientName', dealInfo.clientName);
  }
}, [dealInfo?.clientName]);
</code></pre>
<p>This log shows when the <code>dealInfo.clientName</code> value changes and triggers the code that updates the Formik field. It helps verify whether the effect runs immediately after the context updates — or if there’s any unexpected delay.</p>
<h3>Step 2: Log right after receiving the API response</h3>
<p>The <code>clientName</code> comes from an API call. Once the response is received, I update the global context via <code>setDealInfo</code>. To confirm that part was working correctly, I added a log right before updating the context:</p>
<pre><code class="language-javascript">setDealInfo(prev =&gt; {
  const updated = {
    ...prev,
    clientName: response.clientName, // from API response
  };
  console.log('[setDealInfo] triggered with:', updated);
  return updated;
});
</code></pre>
<p>This shows exactly when the client name arrives from the backend and enters the shared state.</p>
<h3>Here’s what I saw:</h3>
<p>On initial render, the <code>useEffect</code> ran quickly — but <code>clientName</code> was still empty, so <code>setFieldValue</code> was not called.</p>
<p>Then nothing happened for about 2–3 seconds.</p>
<p>After that delay, <strong>both</strong> the <code>setDealInfo</code> log and the <code>useEffect</code> log appeared almost at the same time.</p>
<p>The dropdown updated <strong>immediately</strong> after that.</p>
<p>This clearly shows that the 2–3 second delay was not due to slow state propagation or form logic. The delay happened <strong>before</strong> the context or form had any chance to react — meaning:</p>
<p>✅ <strong>the bottleneck was simply the API call being slow to return.</strong></p>
<h2>Measure the API Call Duration Precisely</h2>
<p>Now that I suspected the API call was slow, the next step was to confirm it with accurate timing logs.</p>
<p>I added these logs inside the <code>loadClientDetails</code> function — the one that calls the backend API to fetch the client name:</p>
<pre><code class="language-javascript">console.time('[loadClientDetails] API call');

const response = await axios.get('/api/clientDetails', { params: { externalId } });

console.timeEnd('[loadClientDetails] API call');

setDealInfo(prev =&gt; {
  const updated = {
    ...prev,
    clientName: response.data.clientName,
  };
  console.log('[setDealInfo] triggered with:', updated);
  return updated;
});
</code></pre>
<p>This lets me measure exactly how long the API request takes from start to finish.</p>
<h3>What I saw:</h3>
<p>I ran the test 4 times, and the results varied wildly:</p>
<ul>
<li><p>About 4000 ms (4 seconds)</p>
</li>
<li><p>About 8000 ms (8 seconds)</p>
</li>
<li><p>About 15000 ms (15 seconds)</p>
</li>
<li><p>About 1000 ms (1 second)</p>
</li>
</ul>
<p>This confirmed beyond doubt that the API was the bottleneck, and that it sometimes responded very slowly.</p>
<h2>Minimal Fix: Show Loading Clearly, Don’t Wait in Confusion</h2>
<p>The simplest and most effective fix was to explicitly handle the loading state on the Summary page.</p>
<p>Since the page already had a loading spinner for the segment data (fetched via <a href="https://tanstack.com/query/v5/docs/framework/react/reference/useQuery">useQuery</a>), I added a similar one for the client name:</p>
<pre><code class="language-javascript">if (segmentIsLoading) {
  return &lt;LoadingSpinner label="Loading segment..." /&gt;;
}

if (!dealInfo.clientName) {
  return &lt;LoadingSpinner label="Loading client name..." /&gt;;
}
</code></pre>
<p>This way, users immediately see that data is loading — instead of staring at an empty field, wondering if something’s broken.</p>
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mf8n213oh2j7rfhpwsq2.gif" alt="A small loading indicator makes a big difference in clarity." style="display:block;margin:0 auto" />

<p>✅ Problem solved. Happy debugging.</p>
]]></content:encoded></item><item><title><![CDATA[Deploying Express + TypeScript + Prisma to Render (2025): What Went Wrong (and How I Fixed It)]]></title><description><![CDATA[When I deployed my Express + TypeScript + Prisma backend to Render, I didn’t expect to spend an entire afternoon chasing down one error after another — but that’s exactly what happened. This post is a]]></description><link>https://chloezhou.dev/deploying-express-typescript-prisma-to-render-2025-what-went-wrong-and-how-i-fixed-it</link><guid isPermaLink="true">https://chloezhou.dev/deploying-express-typescript-prisma-to-render-2025-what-went-wrong-and-how-i-fixed-it</guid><category><![CDATA[TypeScript]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[prisma]]></category><category><![CDATA[webdev]]></category><dc:creator><![CDATA[Chloe Zhou]]></dc:creator><pubDate>Sat, 21 Mar 2026 02:28:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69bdf388475ca1797455d60f/66d9666b-185a-4db8-9bc2-50f5a3fd5f00.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When I deployed my <strong>Express + TypeScript + Prisma</strong> backend to Render, I didn’t expect to spend an entire afternoon chasing down one error after another — but that’s exactly what happened. This post is a personal log of all the unexpected problems I hit, what they actually meant, and what I did to get things working again.</p>
<p>I’m writing this down in case someone else (or future me) runs into the same stack of issues and needs a sanity check.</p>
<hr />
<h3>🔴 Error 1: <code>process</code> and <code>console</code> not found in TypeScript</h3>
<h4>💥 What was the error?</h4>
<p>During the build, I saw this in the logs:</p>
<pre><code class="language-plaintext">Cannot find name ‘process’.
Cannot find name ‘console’.
</code></pre>
<p>TypeScript didn’t seem to recognize basic Node.js globals. I thought this was weird — these should be available by default, right?</p>
<h4>🧪 What I tried</h4>
<p>I added the following to my <code>tsconfig.json</code>:</p>
<pre><code class="language-json">{
  "compilerOptions": {
    "types": ["node"]
  }
}
</code></pre>
<p>…only to be greeted with:</p>
<pre><code class="language-plaintext">Cannot find type definition file for 'node'
</code></pre>
<p>I had <code>@types/node</code> installed — but it was under <code>devDependencies</code>.</p>
<h4>✅ What actually fixed it</h4>
<p>Turns out Render doesn’t install <code>devDependencies</code> during production builds by default. Once I moved <code>@types/node</code> to <code>dependencies</code>, the build succeeded.</p>
<pre><code class="language-plaintext">npm install @types/node --save
</code></pre>
<p>✔️ <strong>Lesson learned</strong>: If your app builds before running (like most TypeScript setups), your build-time tools and types must be in <code>dependencies</code>, not <code>devDependencies</code>.</p>
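<p>In practice, that meant my dependency split ended up looking roughly like this - a sketch, with illustrative version ranges, where everything the build itself touches (<code>tsc</code>, <code>prisma generate</code>, Node type definitions) lives in <code>dependencies</code>:</p>
<pre><code class="language-json">{
  "dependencies": {
    "express": "^5.1.0",
    "@prisma/client": "^6.6.0",
    "prisma": "^6.6.0",
    "typescript": "^5.8.3",
    "@types/node": "^18.19.0"
  },
  "devDependencies": {
    "ts-node": "^10.9.2"
  }
}
</code></pre>
<p>Anything only used for local development (like <code>ts-node</code>) can safely stay in <code>devDependencies</code>.</p>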
<h3>🔴 Error 2: Cannot find index.js on Render</h3>
<h4>💥 What was the error?</h4>
<p>Another build, another facepalm:</p>
<pre><code class="language-plaintext">Error: Cannot find module '/opt/render/project/src/index.js'
</code></pre>
<p>Render was trying to start my app using <code>node index.js</code>, but my compiled code lived in <code>dist/index.js</code>.</p>
<h4>🧪 What I tried</h4>
<p>I updated <code>package.json</code>:</p>
<pre><code class="language-json">{
  "main": "dist/index.js",
  "scripts": {
    "build": "tsc",
    "start": "node dist/index.js"
  }
}
</code></pre>
<p>Still didn’t work.</p>
<h4>✅ What actually fixed it</h4>
<p>Render’s <strong>Start Command</strong> was still set to <code>node index.js</code> in the Dashboard.</p>
<p>Once I changed it to:</p>
<pre><code class="language-plaintext">npm start
</code></pre>
<p>…everything clicked into place.</p>
<p>✔️ <strong>Lesson learned</strong>: Your local scripts don’t override what Render runs. Double-check the <strong>Start Command</strong> and <strong>Build Command</strong> fields in the Render Dashboard.</p>
<h3>🔴 Error 3: @prisma/client did not initialize</h3>
<h4>💥 What was the error?</h4>
<pre><code class="language-plaintext">@prisma/client did not initialize yet. Please run "prisma generate" and try to import it again.
</code></pre>
<p>At this point, I wasn’t even surprised anymore.</p>
<h4>🧪 What I tried</h4>
<p>I checked my build script. It was just:</p>
<pre><code class="language-plaintext">"build": "tsc"
</code></pre>
<p>…but Prisma Client needs to be generated <em>before</em> TypeScript compiles the code that imports it. Oops.</p>
<p>Also, I noticed my <code>schema.prisma</code> had a custom output path like this:</p>
<pre><code class="language-plaintext">generator client {
  provider = "prisma-client-js"
  output   = "../src/generated/prisma"
}
</code></pre>
<h4>✅ What actually fixed it</h4>
<p>I updated the Prisma generator config to use the default output path (which lives inside <code>node_modules/.prisma/client</code>):</p>
<pre><code class="language-plaintext">generator client {
  provider = "prisma-client-js"
  output   = "../node_modules/.prisma/client"
}
</code></pre>
<p>I updated the build script to ensure Prisma Client gets generated before compilation:</p>
<pre><code class="language-plaintext">"build": "prisma generate --no-engine &amp;&amp; tsc"
</code></pre>
<p>I also updated the import to:</p>
<pre><code class="language-typescript">import { PrismaClient } from '@prisma/client';
</code></pre>
<p>✔️ <strong>Lesson learned</strong>: Prisma Client must be generated before compilation. Also, using the default output path avoids a lot of edge cases — especially when deploying.</p>
<h3>✅ Final Checklist (So I Don’t Forget Again)</h3>
<p>Here’s a checklist of everything I ended up fixing:</p>
<ul>
<li><p>Move <code>@types/node</code> from <code>devDependencies</code> → <code>dependencies</code></p>
</li>
<li><p>Use start: <code>node dist/index.js</code> in scripts</p>
</li>
<li><p>Set Build Command to <code>npm run build</code> on Render</p>
</li>
<li><p>Set Start Command to <code>npm start</code> on Render</p>
</li>
<li><p>Use <code>prisma generate</code> before <code>tsc</code> in your build step</p>
</li>
<li><p>Avoid customizing Prisma output unless you have to — use the default <code>node_modules/.prisma/client</code></p>
</li>
<li><p>Use <code>--no-engine</code> with <code>prisma generate</code> if you’re deploying or using Prisma Accelerate</p>
</li>
</ul>
<h3>One last thing</h3>
<p>If you’re here because your Render build failed with some weird Prisma or TypeScript error — you’re not alone. It’s usually something small, just hiding in plain sight.</p>
<p>Hope this helps someone else. If you've run into other gotchas with this stack, drop them in the comments — I'd love to hear what tripped you up too.</p>
<hr />
<p>🛠 Tech Stack / Environment</p>
<ul>
<li><p>Node.js: 18.20.5</p>
</li>
<li><p>Express: 5.1.0</p>
</li>
<li><p>TypeScript: 5.8.3</p>
</li>
<li><p>Prisma: 6.6.0</p>
</li>
<li><p>@prisma/client: 6.6.0</p>
</li>
<li><p>ts-node: 10.9.2</p>
</li>
<li><p>Render deployment: May 2025</p>
</li>
</ul>
]]></content:encoded></item></channel></rss>