<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Prashant Bansal]]></title><description><![CDATA[Engineer | Entrepreneur | Author]]></description><link>https://prashantb.me/</link><image><url>https://prashantb.me/favicon.png</url><title>Prashant Bansal</title><link>https://prashantb.me/</link></image><generator>Ghost 5.80</generator><lastBuildDate>Sat, 18 Apr 2026 11:19:41 GMT</lastBuildDate><atom:link href="https://prashantb.me/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Your AI Can't Explain Your Product Because Your Backend Is a Mess]]></title><description><![CDATA[<p>If your team can&#x2019;t answer &#x201C;why did this happen?&#x201D; in 30 seconds, adding AI will only make it worse</p>
<p>A customer asks a simple question:</p>
<p>&#x201C;Why was I denied?&#x201D;</p>
<p>Your team&#x2019;s response?<br>
&#x2022;	Support escalates<br>
&#x2022;	Engineering digs through logs<br>
&#x2022;	Someone pieces together</p>]]></description><link>https://prashantb.me/your-ai-cant-explain-your-product-because-your-backend-is-a-mess/</link><guid isPermaLink="false">69cc81dfd74fec1fb3af2dac</guid><category><![CDATA[backend]]></category><category><![CDATA[ai]]></category><category><![CDATA[code]]></category><dc:creator><![CDATA[Prashant Bansal]]></dc:creator><pubDate>Wed, 01 Apr 2026 02:27:04 GMT</pubDate><content:encoded><![CDATA[<p>If your team can&#x2019;t answer &#x201C;why did this happen?&#x201D; in 30 seconds, adding AI will only make it worse</p>
<p>A customer asks a simple question:</p>
<p>&#x201C;Why was I denied?&#x201D;</p>
<p>Your team&#x2019;s response?<br>
&#x2022;	Support escalates<br>
&#x2022;	Engineering digs through logs<br>
&#x2022;	Someone pieces together a story<br>
&#x2022;	You send a confident answer</p>
<p>And there&#x2019;s a decent chance it&#x2019;s wrong.</p>
<p>Now here&#x2019;s the uncomfortable part:</p>
<p>Adding AI to this system doesn&#x2019;t fix it.<br>
It just makes the wrong answer sound better.</p>
<p>&#x2E3B;</p>
<p>The real problem (that nobody says out loud)</p>
<p>Most companies don&#x2019;t have an &#x201C;AI explainability&#x201D; problem.</p>
<p>They have a decision visibility problem.</p>
<p>Your system makes decisions every day:<br>
&#x2022;	approvals<br>
&#x2022;	rejections<br>
&#x2022;	routing<br>
&#x2022;	pricing<br>
&#x2022;	prioritization</p>
<p>But those decisions are:<br>
&#x2022;	scattered across services<br>
&#x2022;	buried in conditionals<br>
&#x2022;	mixed with side effects<br>
&#x2022;	undocumented in any usable way</p>
<p>So when someone asks why something happened&#x2026;</p>
<p>You investigate.</p>
<p>You don&#x2019;t know.</p>
<p>&#x2E3B;</p>
<p>Why this becomes a business problem fast</p>
<p>At small scale, this is annoying.</p>
<p>At scale, it&#x2019;s expensive.<br>
&#x2022;	Support slows down<br>
&#x2022;	Customers lose trust<br>
&#x2022;	Engineers get pulled into every edge case<br>
&#x2022;	AI outputs become unreliable<br>
&#x2022;	Decision-making becomes un-auditable</p>
<p>And suddenly &#x201C;why did this happen?&#x201D; becomes one of the most expensive questions in your company.</p>
<p>&#x2E3B;</p>
<p>The fix is simpler than you think</p>
<p>Don&#x2019;t start with AI.</p>
<p>Start with clarity.</p>
<p>Pick one decision your team constantly gets asked about.</p>
<p>Then make it visible while it runs.</p>
<p>Capture:<br>
&#x2022;	inputs (what data was used)<br>
&#x2022;	rules (what was checked)<br>
&#x2022;	branch (what path was taken)<br>
&#x2022;	outcome (what happened)<br>
&#x2022;	reason (in plain English)</p>
<p>That&#x2019;s it.</p>
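<p>Concretely (a sketch only; the names are illustrative, not from any particular product), those five captures fit in one small record written at decision time:</p>
<pre><code class="language-python">from dataclasses import dataclass

# Illustrative shape for one logged decision; each field maps to the
# list above: inputs, rules, branch, outcome, reason.
@dataclass
class DecisionRecord:
    decision: str   # which decision this is, e.g. "Customer Routing"
    inputs: dict    # what data was used
    rules: list     # what was checked, as (rule, passed) pairs
    branch: str     # what path was taken
    outcome: str    # what happened
    reason: str     # plain-English explanation
</code></pre>
<p>Log one of these per decision, and the &#x201C;why&#x201D; question stops requiring an engineer.</p>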
<p>&#x2E3B;</p>
<p>What this looks like in practice</p>
<p>A support rep should see something like:</p>
<p>&#x2E3B;</p>
<p>Decision: Customer Routing<br>
Plan: Growth<br>
ARR: $18,000</p>
<p>Checks<br>
&#x2022;	ARR &gt; $50k &#x2192; No<br>
&#x2022;	Enterprise plan &#x2192; No</p>
<p>Outcome<br>
Standard queue</p>
<p>Reason<br>
Customer does not meet enterprise routing criteria.</p>
<p>&#x2E3B;</p>
<p>No escalation. No guessing. No engineer needed.</p>
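<p>As a sketch (the field names and thresholds here are illustrative, not from any particular product), the routing view above can be produced by logging one structured record at the moment the decision runs:</p>
<pre><code class="language-python">def route_customer(plan, arr):
    # Illustrative sketch: capture inputs, checks, outcome, and reason
    # in one record while the decision executes.
    checks = [
        ("ARR > $50k", arr > 50_000),
        ("Enterprise plan", plan == "Enterprise"),
    ]
    if any(passed for _, passed in checks):
        outcome = "Enterprise queue"
        reason = "Customer meets enterprise routing criteria."
    else:
        outcome = "Standard queue"
        reason = "Customer does not meet enterprise routing criteria."
    return {
        "decision": "Customer Routing",
        "inputs": {"plan": plan, "arr": arr},
        "checks": [{"rule": r, "passed": p} for r, p in checks],
        "outcome": outcome,
        "reason": reason,
    }

record = route_customer(plan="Growth", arr=18_000)
</code></pre>
<p>Anyone reading that record sees the same checks and the same reason, with no log archaeology required.</p>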
<p>&#x2E3B;</p>
<p>Where most teams go wrong</p>
<p>This is where things usually derail.</p>
<p>Leaders hear this and jump to:<br>
&#x2022;	&#x201C;Let&#x2019;s redesign the backend&#x201D;<br>
&#x2022;	&#x201C;Let&#x2019;s build a graph system&#x201D;<br>
&#x2022;	&#x201C;Let&#x2019;s add AI explanations everywhere&#x201D;</p>
<p>That&#x2019;s overkill.</p>
<p>You don&#x2019;t need to fix everything.</p>
<p>You need to fix the places where confusion costs you money.</p>
<p>&#x2E3B;</p>
<p>A simple test (most systems fail this)</p>
<p>Take a real case and ask:</p>
<p>Can someone outside engineering explain this decision in under 30 seconds?</p>
<p>If not:<br>
&#x2022;	your system isn&#x2019;t clear<br>
&#x2022;	your AI won&#x2019;t be reliable<br>
&#x2022;	your team is scaling confusion</p>
<p>&#x2E3B;</p>
<p>Where AI actually fits</p>
<p>Once your decisions are structured like this, AI becomes powerful:<br>
&#x2022;	generates consistent explanations<br>
&#x2022;	supports customers instantly<br>
&#x2022;	audits decisions<br>
&#x2022;	surfaces patterns</p>
<p>But without structure?</p>
<p>AI will fill the gaps.</p>
<p>And it will sound convincing doing it.</p>
<p>&#x2E3B;</p>
<p>The takeaway</p>
<p>This isn&#x2019;t about better models.</p>
<p>It&#x2019;s about better systems.<br>
&#x2022;	If your system is clear &#x2192; AI amplifies it<br>
&#x2022;	If your system is messy &#x2192; AI hides it</p>
<p>Start with one decision.</p>
<p>Make it visible.<br>
Make it testable.<br>
Make it human-readable.</p>
<p>Then layer AI on top.</p>
<p>&#x2E3B;</p>
<p>Because right now?</p>
<p>Your AI isn&#x2019;t explaining your product.</p>
<p>It&#x2019;s guessing.</p>
<p>And your customers can feel it.</p>
]]></content:encoded></item><item><title><![CDATA[From Pipeline Purgatory to Data Nirvana]]></title><description><![CDATA[<h2 id="the-ghost-in-the-machine-remembering-the-bad-old-days">The Ghost in the Machine: Remembering the Bad Old Days</h2><p>Picture this: It&apos;s 2:47 AM on a Tuesday. My phone buzzes with that distinctive Slack notification sound that still triggers my fight-or-flight response. The nightly ETL job has failed. Again. Something about a malformed JSON response from</p>]]></description><link>https://prashantb.me/from-pipeline-purgatory-to-data-nirvana/</link><guid isPermaLink="false">68bfad92d74fec1fb3af2d69</guid><category><![CDATA[data]]></category><category><![CDATA[data pipeline]]></category><category><![CDATA[bigquery]]></category><category><![CDATA[fivetran]]></category><category><![CDATA[google cloud]]></category><category><![CDATA[etl]]></category><category><![CDATA[elt]]></category><category><![CDATA[architecture]]></category><dc:creator><![CDATA[Prashant Bansal]]></dc:creator><pubDate>Tue, 09 Sep 2025 04:36:58 GMT</pubDate><content:encoded><![CDATA[<h2 id="the-ghost-in-the-machine-remembering-the-bad-old-days">The Ghost in the Machine: Remembering the Bad Old Days</h2><p>Picture this: It&apos;s 2:47 AM on a Tuesday. My phone buzzes with that distinctive Slack notification sound that still triggers my fight-or-flight response. The nightly ETL job has failed. Again. Something about a malformed JSON response from an API that worked perfectly fine yesterday.</p><p>I drag myself to my laptop, squinting at terminal windows filled with stack traces that might as well be hieroglyphics. The culprit? Facebook changed a field from <code>campaign_id</code> to <code>campaignId</code>. One capital letter. Four hours of sleep, gone.</p><p>This was a day in my life circa 2018. Every pipeline was a house of cards built on quicksand during an earthquake. I had more monitoring dashboards than actual dashboards. 
My GitHub commits read like the diary of someone slowly losing their grip on reality: &quot;Fixed the thing,&quot; &quot;Really fixed the thing this time,&quot; &quot;Why won&apos;t you work,&quot; &quot;Please work,&quot; &quot;IT WORKS DON&apos;T TOUCH ANYTHING.&quot;</p><p>The worst part? I thought this was normal. We all did. It was our shared trauma, swapped like war stories at conferences over overpriced craft beer.</p><p>Then everything changed.</p><h2 id="the-day-i-stopped-writing-etl-scripts">The Day I Stopped Writing ETL Scripts</h2><p>Here&apos;s what happened: Our startup was growing faster than our infrastructure could handle. We had data scattered across seventeen different systems (yes, I counted), each with its own special snowflake API. Our analysts were begging for unified reporting. The CEO wanted real-time dashboards. And there I was, maintaining a Frankenstein&apos;s monster of Apache Airflow DAGs held together by environmental variables and prayer.</p><p>A colleague mentioned Fivetran casually, the way you&apos;d recommend a good taco place. &quot;It just works,&quot; he said. Those three words should have been a red flag. Nothing in data engineering &quot;just works.&quot;</p><p>But desperation makes you try things.</p><p>I logged into Fivetran&apos;s interface expecting the usual: incomprehensible documentation, configuration files that require a PhD in YAML, and at least three authentication methods that would somehow still fail. Instead, I got something that looked suspiciously like it was designed by someone who actually had to use it.</p><p>The PostgreSQL connector asked for five things:</p><ul><li>Host</li><li>Port</li><li>Database name</li><li>Username</li><li>Password</li></ul><p>That&apos;s it. No SSH tunnel configuration. No SSL certificate wrangling. No sacrificial offerings to the networking gods.</p><p>I clicked &quot;Test Connection,&quot; fully prepared for the familiar red error message. Instead: a green checkmark. 
My production database was talking to Fivetran. Just like that.</p><p>The cynic in me was screaming. This had to be a trick.</p><h2 id="the-moment-everything-clicked-literally">The Moment Everything Clicked (Literally)</h2><p>Within an hour&#x2014;and I&apos;m not exaggerating&#x2014;I had connected:</p><ul><li>Our production PostgreSQL database (with CDC enabled for real-time replication)</li><li>Segment&apos;s entire event stream (goodbye, manual webhook handlers)</li><li>Facebook Ads Manager (no more rate limit nightmares)</li><li>Stripe billing data (with automatic schema evolution)</li><li>Even our quirky internal REST API (using Fivetran&apos;s Function connector)</li></ul><p>Each connection followed the same pattern: authenticate, select tables, choose sync frequency, done. It was almost insulting how straightforward it was. Years of accumulated scar tissue from building these integrations by hand, and now they were just... checkboxes.</p><p>But here&apos;s where it got interesting: BigQuery.</p><p>Setting up BigQuery as the destination was equally anticlimactic. Fivetran asked for a service account key, I provided one, and suddenly data started flowing. Not trickling&#x2014;flowing. Schemas materialized automatically in BigQuery, perfectly typed and organized. Tables appeared with names that actually made sense. Foreign keys were preserved. Data types were correctly mapped.</p><p>I watched the sync logs, waiting for the other shoe to drop. An out-of-memory error, maybe. A timeout. Something. Anything to justify my years of suffering.</p><p>Nothing happened. Well, that&apos;s not true&#x2014;everything happened. The data moved. Reliably. Consistently. 
Boringly.</p><h2 id="the-numbers-that-made-my-cfo-smile">The Numbers That Made My CFO Smile</h2><p>Let me paint you a picture with actual metrics:</p><p><strong>Before Fivetran + BigQuery:</strong></p><ul><li>Pipeline development time: 2-3 weeks per source</li><li>Maintenance overhead: 40% of my time</li><li>Data freshness: 1-24 hours (when things worked)</li><li>Monthly infrastructure costs: ~$8,000 (servers, monitoring, alerting)</li><li>Sleep quality: What sleep?</li></ul><p><strong>After Fivetran + BigQuery:</strong></p><ul><li>Pipeline development time: 1-2 hours per source</li><li>Maintenance overhead: Maybe 5% of my time</li><li>Data freshness: 5-15 minutes for most sources</li><li>Monthly costs: ~$3,000 for Fivetran, ~$1,500 for BigQuery</li><li>Sleep quality: Like a baby (one that actually sleeps)</li></ul><p>We were processing:</p><ul><li>50 million Segment events monthly</li><li>200GB of PostgreSQL changes weekly</li><li>15 marketing platform integrations</li><li>All updating every 15 minutes to 6 hours depending on the source</li></ul><p>The kicker? We scaled from 10GB to 2TB of warehouse data without changing a single configuration. BigQuery just... handled it. No capacity planning meetings. No frantic vertical scaling at 11 PM on a Friday.</p><h2 id="the-plot-twist-learning-to-let-go">The Plot Twist: Learning to Let Go</h2><p>Here&apos;s something nobody tells you about modern data infrastructure: the hardest part isn&apos;t technical&#x2014;it&apos;s psychological.</p><p>I spent the first month after setting up Fivetran checking the logs obsessively. Surely something would break. I&apos;d refresh the sync status page like it was my Twitter feed during an election. Every successful sync felt like I was getting away with something.</p><p>Then came the transformations. Fivetran deliberately doesn&apos;t handle the &quot;T&quot; in ELT, and initially, that bothered me. Where was my complete solution? But this constraint turned out to be liberating. 
We adopted dbt (data build tool) for transformations, and suddenly our analysts were writing their own SQL models. They weren&apos;t blocked on me anymore. They could iterate, experiment, fail fast&#x2014;all the startup mantras we preached but rarely practiced.</p><p>Our dbt models looked something like this:</p><pre><code class="language-sql">-- models/marts/marketing/campaign_performance.sql
WITH facebook_spend AS (
  SELECT 
    date,
    campaign_name,
    SUM(spend) as daily_spend,
    SUM(impressions) as impressions
  FROM {{ ref(&apos;stg_facebook_ads__campaigns&apos;) }}
  GROUP BY 1, 2
),
revenue_attribution AS (
  SELECT
    DATE(timestamp) as date,
    utm_campaign as campaign_name,
    SUM(revenue) as attributed_revenue
  FROM {{ ref(&apos;fct_conversions&apos;) }}
  WHERE utm_source = &apos;facebook&apos;
  GROUP BY 1, 2
)
SELECT 
  f.date,
  f.campaign_name,
  f.daily_spend,
  f.impressions,
  COALESCE(r.attributed_revenue, 0) as revenue,
  SAFE_DIVIDE(COALESCE(r.attributed_revenue, 0), f.daily_spend) as roas
FROM facebook_spend f
LEFT JOIN revenue_attribution r USING(date, campaign_name)
</code></pre><p>Clean. Testable. Version controlled. The analysts owned it, understood it, and could modify it without fear of breaking production data flows.</p><h2 id="the-uncomfortable-truths">The Uncomfortable Truths</h2><p>Now, let&apos;s talk about the catches, because there are always catches.</p><p><strong>The Price of Convenience:</strong><br>Fivetran&apos;s MAR (Monthly Active Rows) pricing model is clever until you&apos;re syncing that chatty microservice that logs every mouse movement. We had one table that was 80% of our MAR but provided 2% of our value. The conversation went like this:</p><p>Me: &quot;Do we really need to track every page scroll event?&quot;<br>Product Manager: &quot;Absolutely essential.&quot;<br>Me: &quot;It&apos;s costing us $800/month just for scroll data.&quot;<br>Product Manager: &quot;Oh. Maybe daily aggregates are fine.&quot;</p><p><strong>The Connector Gap:</strong><br>Fivetran has 400+ connectors, which sounds like a lot until you need number 401. We use this obscure inventory management system that was apparently coded in someone&apos;s garage in 2003. No connector. No API documentation. Just tears.</p><p>The solution? Fivetran&apos;s Function connector let me write a lightweight Lambda function to bridge the gap. It wasn&apos;t perfect, but it was manageable:</p><pre><code class="language-python">def handler(request, context):
    from datetime import datetime  # needed for the state timestamp below
    # Fetch data from obscure system
    weird_api_data = fetch_inventory_data()
    
    # Transform to Fivetran format
    return {
        &quot;state&quot;: {&quot;last_updated&quot;: datetime.now().isoformat()},
        &quot;insert&quot;: {
            &quot;inventory_items&quot;: [
                {&quot;id&quot;: item[&quot;ID&quot;], &quot;quantity&quot;: item[&quot;QTY&quot;], 
                 &quot;updated_at&quot;: item[&quot;LAST_MOD&quot;]}
                for item in weird_api_data
            ]
        }
    }
</code></pre><p><strong>The Black Box Problem:</strong><br>When something does go wrong (and it will, because computers), debugging can be frustrating. Fivetran&apos;s error messages sometimes read like fortune cookies: &quot;Sync failed due to unexpected response.&quot; Thanks, very helpful.</p><p>The support team is responsive, but there&apos;s something unsettling about not being able to SSH into a server and fix things yourself. It&apos;s like being a mechanic who&apos;s only allowed to look at the dashboard lights.</p><h2 id="the-philosophical-question-nobody-asks">The Philosophical Question Nobody Asks</h2><p>Here&apos;s what keeps me up at night now (besides nothing, because my pipelines don&apos;t break): Have we traded too much control for convenience?</p><p>There&apos;s an entire generation of data engineers who will never know the joy of writing a custom PostgreSQL replication slot handler. They&apos;ll never experience the triumph of finally getting that SOAP API to work after three days of XML wrestling. Is something lost in this transaction?</p><p>Maybe. But you know what else is lost? Burnout. Frustration. The opportunity cost of building commodity infrastructure instead of solving actual business problems.</p><p>I used to pride myself on being able to build anything from scratch. Now I pride myself on knowing when not to. 
That&apos;s growth, I think.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://prashantb.me/content/images/2025/09/diagram-export-9-9-2025-12_49_56-AM.png" class="kg-image" alt loading="lazy" width="1681" height="806" srcset="https://prashantb.me/content/images/size/w600/2025/09/diagram-export-9-9-2025-12_49_56-AM.png 600w, https://prashantb.me/content/images/size/w1000/2025/09/diagram-export-9-9-2025-12_49_56-AM.png 1000w, https://prashantb.me/content/images/size/w1600/2025/09/diagram-export-9-9-2025-12_49_56-AM.png 1600w, https://prashantb.me/content/images/2025/09/diagram-export-9-9-2025-12_49_56-AM.png 1681w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Architecture Diagram</span></figcaption></figure><h2 id="the-verdict-a-love-letter-to-boring-technology">The Verdict: A Love Letter to Boring Technology</h2><p>Fivetran and BigQuery aren&apos;t sexy. They don&apos;t use the latest JavaScript framework (thank god). They won&apos;t impress anyone at a hackathon. They&apos;re boring in the best possible way&#x2014;the way that electricity is boring, or indoor plumbing.</p><p>They work. Consistently. Reliably. Invisibly.</p><p>Our data team has tripled in size, but our data engineering team hasn&apos;t grown at all. We&apos;re handling 100x the data volume with the same headcount. Our analysts are self-sufficient. Our executives have real-time dashboards that actually show real-time data. Our customer success team can see user behavior patterns before users complain.</p><p>Most importantly, I get to focus on interesting problems now:</p><ul><li>Designing data models that will scale for the next five years</li><li>Building predictive models instead of plumbing</li><li>Actually talking to stakeholders about what insights they need</li><li>Teaching analysts SQL optimization techniques</li><li>Having lunch away from my desk</li></ul><p>Is Fivetran + BigQuery the right choice for everyone? Probably not. 
If you&apos;re Netflix or Uber, you need custom everything. If you&apos;re a 5-person startup, you probably don&apos;t need it yet. But if you&apos;re in that sweet spot&#x2014;growing fast, data becoming critical, engineering resources precious&#x2014;this combination is magic.</p><h2 id="the-final-plot-twist">The Final Plot Twist</h2><p>Remember that obscure inventory system I mentioned? Last month, Fivetran released a connector for it. Turns out we weren&apos;t the only ones suffering.</p><p>I decommissioned my Lambda function with a single click. It felt like saying goodbye to an old friend&#x2014;an annoying, high-maintenance friend who called at 3 AM, but still.</p><p>That&apos;s the thing about the modern data stack: it keeps getting better while you&apos;re sleeping. Literally sleeping. Eight hours a night. It&apos;s revolutionary.</p><p>So here&apos;s my advice to past me, and maybe to you: Stop building pipelines. Start building value. Let Fivetran and BigQuery handle the plumbing. Trust me, your future self will thank you.</p><p>And your phone? It can finally stay on silent.</p><p><em>P.S. - To the three people who will email me about how &quot;real engineers build their own infrastructure&quot;: I built my own for seven years. I have the gray hairs and git history to prove it. Sometimes the most sophisticated engineering decision is choosing not to engineer something. Now if you&apos;ll excuse me, I have a 5 PM meeting to attend. At 5:01, I&apos;ll be logged off, because my pipelines don&apos;t need me anymore. And that&apos;s exactly how it should be.</em></p>]]></content:encoded></item><item><title><![CDATA[Data Lake Challenges and Apache Iceberg]]></title><description><![CDATA[<blockquote>Data storage and processing have evolved rapidly over the past decade, moving from on-site servers to scalable cloud-based systems. 
These modern solutions, often referred to as data lakes, can handle massive streams of data&#x2014;such as billions of credit card transactions, website interactions, or customer activities&#x2014;all in</blockquote>]]></description><link>https://prashantb.me/data-lake-challenges-and-apache-iceberg/</link><guid isPermaLink="false">6771a4bfd74fec1fb3af2cd4</guid><category><![CDATA[iceberg]]></category><category><![CDATA[apache]]></category><category><![CDATA[datalake]]></category><category><![CDATA[architecture]]></category><category><![CDATA[data]]></category><dc:creator><![CDATA[Prashant Bansal]]></dc:creator><pubDate>Wed, 01 Jan 2025 01:25:26 GMT</pubDate><content:encoded><![CDATA[<blockquote>Data storage and processing have evolved rapidly over the past decade, moving from on-site servers to scalable cloud-based systems. These modern solutions, often referred to as data lakes, can handle massive streams of data&#x2014;such as billions of credit card transactions, website interactions, or customer activities&#x2014;all in near real-time.</blockquote><h3 id="the-great-data-lake-delusion">The Great Data Lake Delusion<br></h3><p>Sarah stared at her Slack notifications as they multiplied like digital rabbits. The quarterly board presentation was in three hours, and somehow her company&apos;s &quot;state-of-the-art&quot; data lake was reporting that they had simultaneously gained and lost the same 50,000 customers. The marketing team swore they were heroes, the finance team was preparing for bankruptcy, and the data science team had locked themselves in a conference room, muttering about &quot;eventual consistency&quot; like it was some kind of religious mantra.</p><p>This wasn&apos;t supposed to happen. Two years ago, Sarah&apos;s company had invested millions in a &quot;revolutionary&quot; data lake architecture. The consultants promised it would be their competitive advantage. The vendor demos showed beautiful dashboards updating in real-time. 
The PowerPoint slides were pristine.</p><p><strong>Reality, as usual, had other plans.</strong></p><p>Meanwhile, somewhere in Los Gatos, California, Netflix engineers were having their own existential crisis. Their data lake wasn&apos;t just inconsistent&#x2014;it was actively hostile. Billions of viewing events were scattered across their storage like confetti after a particularly chaotic New Year&apos;s party, and traditional Hive tables were about as reliable as a chocolate teapot under pressure.</p><p>But here&apos;s where the story gets interesting: instead of just complaining about it on Twitter like the rest of us, Netflix actually did something about it. They built Apache Iceberg, and in doing so, accidentally created the solution to every data engineer&apos;s recurring nightmares.</p><h2 id="the-four-horsemen-of-data-lake-hell">The Four Horsemen of Data Lake Hell<br></h2><h3 id="the-everything-is-fine-consistency-crisis">The &quot;Everything is Fine&quot; Consistency Crisis</h3><p>Let&apos;s be brutally honest about data lakes: they were designed by people who clearly never had to explain to a CEO why the revenue numbers changed three times during a single meeting. Traditional data lakes treat data consistency the way teenagers treat curfews&#x2014;more of a suggestion than an actual rule.</p><p>Here&apos;s what typically happens in the wild:</p><ul><li><strong>Team A</strong> updates customer records</li><li><strong>Team B</strong> reads &quot;the latest&quot; data (which is actually from 20 minutes ago)</li><li><strong>Team C</strong> overwrites Team A&apos;s changes without knowing they existed</li><li><strong>Team D</strong> generates a report that makes everyone question reality</li></ul><p>The result? Data that&apos;s about as consistent as a toddler&apos;s nap schedule. 
You think you know what&apos;s happening, but five minutes later, everything has changed and no one can explain why.</p><h3 id="schema-evolution-aka-lets-break-everything">Schema Evolution: AKA &quot;Let&apos;s Break Everything&quot;</h3><p>&quot;Hey, can we just add a simple field to track customer preferences?&quot;</p><p><em>[Cue the dramatic music and slow-motion disaster footage]</em></p><p>In the pre-Iceberg world, this innocent request would trigger what data engineers lovingly call &quot;the schema apocalypse&quot;:</p><ul><li>Emergency architecture review meetings</li><li>Three-week migration planning sessions</li><li>Mandatory downtime windows at 3 AM on Sunday</li><li>Prayer circles and ritual sacrifices to the database gods</li><li>At least one engineer stress-eating pizza at midnight while rebuilding indexes</li></ul><p>Schema changes in traditional systems are handled with all the grace and elegance of performing heart surgery with gardening tools. It&apos;s technically possible, but everyone involved is going to have a bad time.</p><h3 id="the-multi-user-thunderdome">The Multi-User Thunderdome</h3><p>Modern organizations are basically data zoos where everyone wants to feed the animals at the same time. 
You&apos;ve got:</p><ul><li>Marketing teams extracting customer behavior patterns like digital archaeologists</li><li>Finance teams generating compliance reports with the urgency of defusing bombs</li><li>Data scientists training ML models that consume resources like teenagers consume pizza</li><li>Operations teams monitoring dashboards like air traffic controllers</li><li>Executives demanding &quot;real-time insights&quot; about data that&apos;s still being processed</li></ul><p>Without proper coordination, this creates a digital version of bumper cars&#x2014;lots of noise, occasional crashes, and someone always ends up dizzy and confused.</p><h3 id="the-historical-data-hoarding-problem">The Historical Data Hoarding Problem</h3><p>Organizations collect data like digital pack rats. Every click, every transaction, every customer sneeze gets stored &quot;for analytics.&quot; But here&apos;s the kicker: storing petabytes of historical data while keeping it performant and cost-effective is like trying to organize a library where books keep multiplying overnight and occasionally change their own content.</p><p>You need the data for compliance (lawyers are scary), analytics (executives demand insights), and machine learning (the algorithms are hungry), but traditional storage solutions handle this about as well as a paper umbrella handles a hurricane.</p><h2 id="apache-iceberg-the-accidental-hero-story">Apache Iceberg: The Accidental Hero Story</h2><p>Back in 2017, Netflix had a problem. Actually, they had several problems, but the big one was that their data infrastructure was buckling under the weight of their own success. 
Millions of users streaming billions of hours of content generates data at a scale that makes most databases weep quietly in server rooms.</p><p>Their existing Hive tables were failing spectacularly&#x2014;like watching a house of cards collapse in slow motion, except the cards were made of data and the collapse was affecting recommendations for 200+ million subscribers.</p><p>So Netflix did what any sensible engineering organization would do: they built something completely new. Not because they wanted to become open-source heroes (though that&apos;s a nice side effect), but because they literally had no choice. Their business was growing faster than their data infrastructure could handle.</p><p><strong>Apache Iceberg wasn&apos;t born from strategic planning&#x2014;it was born from desperation.</strong></p><p>And thank goodness for that, because the rest of us were drowning too. We just didn&apos;t have Netflix&apos;s resources to build our own life rafts.</p><h2 id="how-iceberg-became-the-data-worlds-superhero">How Iceberg Became the Data World&apos;s Superhero<br></h2><h3 id="acid-transactions-because-chaos-isnt-a-feature">ACID Transactions: Because Chaos Isn&apos;t a Feature</h3><p>Apache Iceberg brings <strong>ACID transactions</strong> to data lakes, which is like giving your data operations a really good therapist. 
Suddenly, everything that was chaotic and unpredictable becomes calm and orderly:</p><ul><li><strong>Atomicity</strong>: Changes either happen completely or not at all</li><li><strong>Consistency</strong>: Data always makes sense</li><li><strong>Isolation</strong>: Multiple teams can work without accidentally sabotaging each other</li><li><strong>Durability</strong>: Committed changes survive system failures</li></ul><p>It&apos;s the difference between a peaceful meditation garden and a toddler birthday party in terms of chaos levels.</p><h3 id="schema-evolution-without-the-drama">Schema Evolution Without the Drama</h3><p>With Iceberg, adding that customer preference field becomes almost disappointingly simple:</p><pre><code class="language-sql">ALTER TABLE customers ADD COLUMN preferences MAP&lt;STRING, STRING&gt;
</code></pre><p>That&apos;s it. No table rebuilds, no downtime, no midnight emergency deployments. The system just&#x2026; handles it.</p><p>Features include:</p><ul><li><strong>Column additions</strong> that don&apos;t break existing applications</li><li><strong>Type promotions</strong> that happen automatically</li><li><strong>Column renames</strong> with full backward compatibility</li><li><strong>Concurrent schema updates</strong> without teams stepping on each other</li></ul><h3 id="time-travel-because-sometimes-you-need-to-go-back">Time Travel: Because Sometimes You Need to Go Back</h3><p>Iceberg&apos;s <strong>time travel capabilities</strong> are like having a time machine for your data:</p><pre><code class="language-sql">SELECT * FROM sales_data 
FOR TIMESTAMP AS OF &apos;2024-01-01 00:00:00&apos;
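-- Engines that support it can also pin a specific snapshot id
-- (the id below is illustrative, not from a real table):
--   SELECT * FROM sales_data FOR VERSION AS OF 1234567890123456789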
</code></pre><p>This makes debugging systematic instead of chaotic. <strong>Snapshot isolation</strong> ensures each team gets their own consistent view of the data.</p><h3 id="storage-management-that-actually-works">Storage Management That Actually Works</h3><p>Iceberg separates metadata from data files, enabling optimizations that feel almost magical:</p><ul><li><strong>Automatic file compaction</strong></li><li><strong>Partition evolution</strong> that adapts to changing patterns</li><li><strong>Metadata-level query pruning</strong> that makes queries fast</li><li><strong>Multi-tier storage</strong> that optimizes costs</li></ul><h2 id="real-world-success-stories">Real-World Success Stories<br></h2><h3 id="the-retail-giants-redemption-arc">The Retail Giant&apos;s Redemption Arc</h3><ul><li>Real-time personalization actually works</li><li>Data consistency issues dropped to near zero</li><li>The 3 AM emergency calls stopped</li><li>Customer satisfaction improved</li></ul><h3 id="the-financial-services-breakthrough">The Financial Services Breakthrough</h3><ul><li>40% reduction in storage costs</li><li>Compliance reports started matching reality</li><li>Happier auditors and executives</li></ul><h3 id="the-developer-experience-revolution">The Developer Experience Revolution</h3><ul><li>Engineers spend time building features, not fighting infra</li><li>Job satisfaction improved</li><li>Attrition dropped</li></ul><h2 id="the-competition-battle-of-the-table-formats">The Competition: Battle of the Table Formats</h2>
<!--kg-card-begin: html-->
<table>
<thead>
<tr>
<th>Feature</th>
<th>Apache Iceberg</th>
<th>Delta Lake</th>
<th>Apache Hudi</th>
<th>Traditional Hive</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>ACID Transactions</strong></td>
<td>&#x2705; Actually works</td>
<td>&#x2705; Works well</td>
<td>&#x2705; Decent</td>
<td>&#x274C; Good luck</td>
</tr>
<tr>
<td><strong>Schema Evolution</strong></td>
<td>&#x1F3C6; Seamless</td>
<td>&#x2705; Solid</td>
<td>&#x2705; Functional</td>
<td>&#x274C; Requires therapy</td>
</tr>
<tr>
<td><strong>Query Engine Support</strong></td>
<td>&#x1F3C6; Works with all</td>
<td>&#x1F527; Spark-centric</td>
<td>&#x26A0;&#xFE0F; Limited/growing</td>
<td>&#x1F4CA; Broad but ancient</td>
</tr>
<tr>
<td><strong>Partition Evolution</strong></td>
<td>&#x1F3C6; Advanced magic</td>
<td>&#x26A0;&#xFE0F; Basic</td>
<td>&#x26A0;&#xFE0F; Getting there</td>
<td>&#x274C; Not happening</td>
</tr>
<tr>
<td><strong>Time Travel</strong></td>
<td>&#x2705; Native</td>
<td>&#x2705; Built-in</td>
<td>&#x2705; Available</td>
<td>&#x274C; Time is an illusion</td>
</tr>
</tbody>
</table>
<!--kg-card-end: html-->
<h2 id="cloud-provider-solutions-the-easy-button">Cloud Provider Solutions: The Easy Button</h2><ul><li><strong>AWS EMR</strong>: Supports all formats, integrates with Glue</li><li><strong>Azure Synapse</strong>: Managed Iceberg with optimizations</li><li><strong>Google Cloud BigLake</strong>: Unified analytics across formats</li></ul><h2 id="the-bottom-line-why-you-should-care">The Bottom Line: Why You Should Care</h2><p><strong>For Executives</strong>: Iceberg reduces operational risk, lowers costs, and speeds up feature delivery.</p><p><strong>For Engineers</strong>: Iceberg removes the drudgery of data lake management, freeing you to build meaningful systems.</p><p><strong>For Everyone Else</strong>: Your dashboards, ML models, and reports actually reflect reality.</p><p>The data revolution is here, and it&apos;s being led by technologies like Apache Iceberg.</p><hr><p><em>Apache Iceberg continues evolving rapidly. The technology landscape changes fast, but the need for reliable, scalable, and sane data operations remains constant.</em></p>]]></content:encoded></item><item><title><![CDATA[Understanding the Command Design Pattern: A Must-Have for Event-Driven Architectures]]></title><description><![CDATA[<p>Imagine you&#x2019;re building a complex application that must respond to various events, such as user actions, system triggers, or external API calls. As your application grows, the code that handles these events can become unwieldy, leading to tightly coupled, hard-to-maintain systems. 
This is where the <strong>Command Design Pattern</strong></p>]]></description><link>https://prashantb.me/understanding-the-command-design-pattern-in-go-a-must-have-for-event-driven-architectures/</link><guid isPermaLink="false">66bd0d49d74fec1fb3af2c85</guid><category><![CDATA[GO]]></category><category><![CDATA[command]]></category><category><![CDATA[command pattern]]></category><category><![CDATA[event driven]]></category><category><![CDATA[event driven architecture]]></category><category><![CDATA[architecture]]></category><category><![CDATA[golang]]></category><dc:creator><![CDATA[Prashant Bansal]]></dc:creator><pubDate>Wed, 14 Aug 2024 20:07:39 GMT</pubDate><content:encoded><![CDATA[<p>Imagine you&#x2019;re building a complex application that must respond to various events, such as user actions, system triggers, or external API calls. As your application grows, the code that handles these events can become unwieldy, leading to tightly coupled, hard-to-maintain systems. This is where the <strong>Command Design Pattern</strong> comes in handy.</p><p>The Command Design Pattern is a behavioral design pattern that turns a request into a stand-alone object containing all information about the request. This transformation allows you to parameterize methods with different requests, delay or queue a request&#x2019;s execution, and support undoable operations.</p><p>In this blog post, we&#x2019;ll explore the Command Design Pattern, focusing on how to implement it. We&#x2019;ll walk through a story-driven example that demonstrates why this pattern is essential for modern, event-driven architectures.</p><h3 id="the-problem-unmanageable-event-handling"><strong>The Problem: Unmanageable Event Handling</strong></h3><p>Imagine you&#x2019;re the lead engineer at a startup developing a sophisticated e-commerce platform. The platform needs to handle various user actions&#x2014;placing an order, canceling an order, updating profile information, and more. 
Initially, everything works fine with a few if-else conditions or a simple switch statement. But as the platform grows, you start noticing issues:</p><p>&#x2022; <strong>Tightly Coupled Code</strong>: Your event handling logic is scattered across the codebase, making it difficult to manage or modify without introducing bugs.</p><p>&#x2022; <strong>Lack of Flexibility</strong>: Adding new actions requires touching multiple parts of the code, leading to longer development cycles.</p><p>&#x2022; <strong>No Undo Support</strong>: Users want to undo certain actions, like canceling an order they just placed, but your system wasn&#x2019;t designed with this in mind.</p><p>Clearly, a better approach is needed&#x2014;enter the Command Design Pattern.</p><h3 id="the-solution-command-design-pattern"><strong>The Solution: Command Design Pattern</strong></h3><p>The Command Design Pattern solves these problems by encapsulating all the details of an operation into a command object. This object includes the operation name, the target of the operation, and any required parameters. By doing so, you can:</p><p>&#x2022; <strong>Decouple the sender and receiver</strong>: The object that initiates the action doesn&#x2019;t need to know anything about the object that performs the action.</p><p>&#x2022; <strong>Queue, log, or undo commands</strong>: Since each command is a standalone object, you can easily store it for later execution, log it, or provide an undo functionality.</p><p>&#x2022; <strong>Add new commands easily</strong>: Extending your system with new commands doesn&#x2019;t require changes to existing code, just the addition of new command objects.</p><p><strong>Implementing the Command Design Pattern in Go</strong></p><p>Let&#x2019;s implement the Command Design Pattern in Go with an example. 
We&#x2019;ll use a story to keep things engaging: Imagine you&#x2019;re building a task management system where users can add, remove, and mark tasks as completed.</p><p><strong>Step 1: Define the Command Interface</strong></p><p>The first step is to define a Command interface with an Execute method. This method will be implemented by all concrete command types.</p><pre><code class="language-go">package main

import &quot;fmt&quot;

// Command interface
type Command interface {
    Execute()
}</code></pre><p><strong>Step 2: Create Concrete Commands</strong></p><p>Next, we create concrete command types that implement the Command interface. For our task management system, we&#x2019;ll create three commands: <code>AddTaskCommand</code>, <code>RemoveTaskCommand</code>, and <code>CompleteTaskCommand</code>.</p><pre><code class="language-go">// Receiver
type TaskManager struct {
    tasks []string
}

func (t *TaskManager) AddTask(task string) {
    t.tasks = append(t.tasks, task)
    fmt.Println(&quot;Task added:&quot;, task)
}

func (t *TaskManager) RemoveTask(task string) {
    for i, tsk := range t.tasks {
        if tsk == task {
            t.tasks = append(t.tasks[:i], t.tasks[i+1:]...)
            fmt.Println(&quot;Task removed:&quot;, task)
            return
        }
    }
    fmt.Println(&quot;Task not found:&quot;, task)
}

func (t *TaskManager) CompleteTask(task string) {
    fmt.Println(&quot;Task completed:&quot;, task)
}

// Concrete Command for adding a task
type AddTaskCommand struct {
    taskManager *TaskManager
    task        string
}

func (c *AddTaskCommand) Execute() {
    c.taskManager.AddTask(c.task)
}

// Concrete Command for removing a task
type RemoveTaskCommand struct {
    taskManager *TaskManager
    task        string
}

func (c *RemoveTaskCommand) Execute() {
    c.taskManager.RemoveTask(c.task)
}

// Concrete Command for completing a task
type CompleteTaskCommand struct {
    taskManager *TaskManager
    task        string
}

func (c *CompleteTaskCommand) Execute() {
    c.taskManager.CompleteTask(c.task)
}</code></pre><p><strong>Step 3: Implement the Invoker</strong></p><p>The invoker is responsible for executing commands. It doesn&#x2019;t need to know anything about the commands themselves, just that they implement the Command interface.</p><pre><code class="language-go">// Invoker
type TaskInvoker struct {
    commandQueue []Command
}

func (i *TaskInvoker) StoreCommand(command Command) {
    i.commandQueue = append(i.commandQueue, command)
}

func (i *TaskInvoker) ExecuteCommands() {
    for _, command := range i.commandQueue {
        command.Execute()
    }
    i.commandQueue = nil
}</code></pre><p><strong>Step 4: Putting It All Together</strong></p><p>Now, let&#x2019;s see how everything fits together.</p><pre><code class="language-go">func main() {
    taskManager := &amp;TaskManager{}

    addTaskCommand := &amp;AddTaskCommand{
        taskManager: taskManager,
        task:        &quot;Learn Go&quot;,
    }

    removeTaskCommand := &amp;RemoveTaskCommand{
        taskManager: taskManager,
        task:        &quot;Learn Python&quot;,
    }

    completeTaskCommand := &amp;CompleteTaskCommand{
        taskManager: taskManager,
        task:        &quot;Learn Go&quot;,
    }

    invoker := &amp;TaskInvoker{}
    invoker.StoreCommand(addTaskCommand)
    invoker.StoreCommand(removeTaskCommand)
    invoker.StoreCommand(completeTaskCommand)

    invoker.ExecuteCommands()
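    // Expected output, given the commands queued above:
    //   Task added: Learn Go
    //   Task not found: Learn Python
    //   Task completed: Learn Go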
}</code></pre><p><strong>Why Use the Command Design Pattern in Modern Applications?</strong></p><p>Now that we&#x2019;ve gone through the technical implementation, let&#x2019;s revisit the story and discuss why the Command Design Pattern is so valuable in modern, event-driven architectures.</p><p>1. <strong>Scalability</strong>: As your application grows, you&#x2019;ll inevitably add more features. With the Command Design Pattern, adding new commands doesn&#x2019;t disrupt existing code, making your system more scalable.</p><p>2. <strong>Maintainability</strong>: Decoupling the invoker from the receiver makes your code easier to maintain. You can modify or replace commands without affecting other parts of the system.</p><p>3. <strong>Flexibility</strong>: The pattern provides the flexibility to log, queue, and undo actions. In a real-world e-commerce system, this could mean allowing customers to undo orders or administrators to batch process user actions.</p><p>4. <strong>Testability</strong>: Commands are easy to test in isolation since they encapsulate all the necessary information for the action they perform.</p><p><strong>Conclusion</strong></p><p>The Command Design Pattern is more than just a design pattern&#x2014;it&#x2019;s a powerful tool that can help you build flexible, maintainable, and scalable systems. Whether you&#x2019;re developing a task management app, an e-commerce platform, or any other event-driven application, this pattern is worth considering.</p><p>By adopting the Command Design Pattern in your Go projects, you&#x2019;ll be well-equipped to handle the complexities of modern software development. 
Plus, the benefits of decoupling and flexibility will pay off in the long run as your application grows and evolves.</p><p>So the next time you&#x2019;re faced with the challenge of managing a myriad of actions in your application, remember the Command Design Pattern and the story of the task management system&#x2014;it just might be the solution you need.</p>]]></content:encoded></item><item><title><![CDATA[Async Generators, the superior async await]]></title><description><![CDATA[<p>An async generator is a special way for computers to do things that take time, such as getting information from the internet. Let&apos;s take a deeper dive into it with a real-world example.</p><p>Imagine you&apos;re at a magical bakery, and you&apos;re trying to bake a</p>]]></description><link>https://prashantb.me/async-generators-the-superior-async-await/</link><guid isPermaLink="false">65f6d24cd74fec1fb3af2bc2</guid><category><![CDATA[async generators]]></category><category><![CDATA[generators]]></category><category><![CDATA[Javascript]]></category><category><![CDATA[async]]></category><category><![CDATA[await]]></category><category><![CDATA[asynchronous programming]]></category><dc:creator><![CDATA[Prashant Bansal]]></dc:creator><pubDate>Wed, 16 Aug 2023 16:42:07 GMT</pubDate><content:encoded><![CDATA[<p>An async generator is a special way for computers to do things that take time, such as getting information from the internet. Let&apos;s take a deeper dive into it with a real-world example.</p><p>Imagine you&apos;re at a magical bakery, and you&apos;re trying to bake a variety of delicious cakes. You have a helper named Alice, who is a bit of a baking expert. Now, baking takes time, and some steps might require waiting &#x2013; like waiting for the cake to bake in the oven. This is similar to how computers sometimes need to wait for things to happen, like fetching data from the internet.</p><p>Now, in the world of programming, JavaScript is like your magical kitchen. 
Async generators are like a special recipe book that Alice follows to bake cakes, but with a twist. You see, Alice can&apos;t bake all the cakes at once &#x2013; that would be too overwhelming. Instead, she bakes them one by one, taking breaks when needed.</p><p>Here&apos;s how async generators work in the magical bakery of JavaScript:</p><ol><li><strong>The Recipe Book (Async Generator Function):</strong> Just like a recipe book tells Alice how to bake cakes, an async generator function is a set of instructions for JavaScript on how to do tasks that might take time, like fetching data from the internet or reading a big file. This function is special because it can &quot;pause&quot; and &quot;resume&quot; whenever it needs to wait for something to happen.</li><li><strong>Baking Cakes (Generating Values):</strong> Instead of baking cakes, the async generator function produces values over time. These values are like the cakes that Alice bakes. But instead of baking all the cakes at once, the async generator bakes one &quot;value&quot; at a time, and then takes a break before baking the next one.</li><li><strong>Taking Breaks (Pausing and Resuming):</strong> Imagine Alice is baking a cake, but then realizes she needs more chocolate chips. She pauses, goes to get the chips, and then resumes baking. Similarly, the async generator can &quot;pause&quot; its execution when it needs to wait for something. It tells JavaScript, &quot;Hey, I&apos;m waiting for something, let me know when it&apos;s ready!&quot; Then, when it&apos;s time, it &quot;resumes&quot; where it left off.</li><li><strong>Enjoying Cakes (Consuming Values):</strong> Now, you can be the one enjoying the cakes Alice bakes. In JavaScript, you can &quot;consume&quot; the values the async generator produces. 
It&apos;s like waiting for Alice to finish baking a cake and then getting to eat it.</li><li><strong>Many Cakes, One by One (Async Iteration):</strong> Just as Alice can bake multiple cakes, the async generator can produce multiple values over time. And just like you&apos;d eat the cakes one by one, JavaScript can &quot;consume&quot; these values in the order they&apos;re baked.</li></ol><p>So, why are async generators useful? Imagine you&apos;re creating a website that shows cute animal pictures from around the world. You need to fetch these pictures from different websites, which can take time. With async generators, your JavaScript code can fetch and show these pictures one by one, without freezing up the whole website. This way, your website stays responsive and enjoyable, even when dealing with slow tasks like fetching data.</p><p>In simple words, async generators help JavaScript handle tasks that might take time, like baking cakes one by one, so your programs stay smooth and your users keep enjoying their experience. Just like Alice&apos;s magical bakery makes sure you enjoy cakes without waiting too long, async generators in JavaScript keep things running smoothly even when there&apos;s waiting involved.</p><p>Let&apos;s dive into some code now - </p><pre><code class="language-javascript">// Imagine we have a function that simulates getting data from the internet
function fetchDataFromInternet(delay, data) {
  return new Promise(resolve =&gt; {
    setTimeout(() =&gt; {
      resolve(data);
    }, delay);
  });
}

// Async generator function that fetches data one by one
async function* fetchMultipleData() {
  yield fetchDataFromInternet(2000, &quot;First piece of data&quot;);
  yield fetchDataFromInternet(3000, &quot;Second piece of data&quot;);
  yield fetchDataFromInternet(1500, &quot;Third piece of data&quot;);
}

// Using the async generator
(async () =&gt; {
  for await (const data of fetchMultipleData()) {
    console.log(&quot;Received:&quot;, data);
  }
  console.log(&quot;All data received!&quot;);
})();</code></pre><p>In this example:</p><ol><li><code>fetchDataFromInternet</code> is a simulated function that returns a promise, pretending to fetch data from the internet. It takes a delay parameter to simulate the time it takes to get the data.</li><li><code>fetchMultipleData</code> is the async generator function. It uses the <code>yield</code> keyword to produce values one by one. Each <code>yield</code> represents fetching data from the internet with a different delay.</li><li>The <code>for await...of</code> loop is used to consume the values produced by the async generator. It waits for each value and logs it.</li><li>When you run the code, you&apos;ll notice that each piece of data takes a different amount of time to be received, but the loop doesn&apos;t wait for all of them to finish before processing the next one. This demonstrates the asynchronous nature of the generator.</li><li>Finally, the &quot;All data received!&quot; message is logged when all the data pieces have been fetched and processed.</li></ol><p>Remember, async generators are especially helpful when dealing with tasks that have varying time delays, like fetching data from the internet, reading large files, or processing data in batches. They allow you to handle these tasks efficiently and without freezing up your program.</p>]]></content:encoded></item><item><title><![CDATA[Why I moved from Nodejs to Go]]></title><description><![CDATA[<p>Over the past 8 years, I have been extensively working with Nodejs on the backend developing various types of I/O intensive web applications. 
I have developed tens of small scale and large scale services that are handling 1000&apos;s of operations in no time and are surfacing applications</p>]]></description><link>https://prashantb.me/why-i-moved-from-nodejs-to-go/</link><guid isPermaLink="false">65f6d24cd74fec1fb3af2bc1</guid><category><![CDATA[GO]]></category><category><![CDATA[nodejs]]></category><dc:creator><![CDATA[Prashant Bansal]]></dc:creator><pubDate>Sun, 28 May 2023 06:48:34 GMT</pubDate><content:encoded><![CDATA[<p>Over the past 8 years, I have been extensively working with Nodejs on the backend developing various types of I/O intensive web applications. I have developed tens of small scale and large scale services that are handling 1000&apos;s of operations in no time and are surfacing applications across various product domains.</p><p>I was fascinated by the handling of files in Nodejs. Being able to pipe streams and manipulate chunks is extremely powerful. So far, no language has matched the ease and the speed with which these operations are carried out. Some time back I decided to give Go a go, simply because I had been hearing really good stuff about it from fellow engineers. The results, to my surprise, were drastically different. I knew Go would perform better, but as I put the system under load it became clear: Go simply beats Nodejs by a fair margin.</p><p>Let&apos;s try to build a CPU intensive program and compare how both perform. Consider a factorial program. Before moving forward, here&apos;s what a factorial is - </p><blockquote>In mathematics, the <strong>factorial</strong> of a non-negative integer, denoted by <strong>n!</strong>, is the product of all positive integers less than or equal to <strong>n</strong>. The factorial of <strong>n</strong> also equals the product of <strong>n</strong> with the next smaller factorial:<br><br>n! = n&#xD7;(n&#x2212;1)&#xD7;(n&#x2212;2)&#xD7;(n&#x2212;3)&#xD7;&#x22EF;&#xD7;3&#xD7;2&#xD7;1 = n&#xD7;(n&#x2212;1)!<br><br>For example, 5! = 5&#xD7;4! 
= 5&#xD7;4&#xD7;3&#xD7;2&#xD7;1 = 120. <br>The value of 0! is 1.</blockquote><p>Here&apos;s how it can be written in Go (assuming recursion is understood):</p><pre><code class="language-go">package main

import (
	&quot;fmt&quot;
	&quot;time&quot;
)

func computeFactorial(n int) int {
	if n &lt;= 1 {
		return 1
	}
	return n * computeFactorial(n-1)
}
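// Note: with a 64-bit int this overflows for n > 20, so 20 is the largest
// input this version handles exactly; use math/big for anything larger.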

func main() {
	start := time.Now()
	result := computeFactorial(20)
	elapsed := time.Since(start)
	fmt.Printf(&quot;Result: %d\n&quot;, result)
	fmt.Printf(&quot;Time taken: %s\n&quot;, elapsed)
}</code></pre><p>In this example, we calculate the factorial of a number (<em>20</em>) using a recursive function. Go&apos;s compiled nature allows it to handle CPU-bound tasks like this efficiently. On a 6-core machine, here&apos;s the output - </p><pre><code>Result: 2432902008176640000
Time taken: 207ns</code></pre><p>NodeJS Example - </p><pre><code class="language-javascript">function computeFactorial(n) {
	if (n &lt;= 1) {
		return 1;
	}
	return n * computeFactorial(n - 1);
}
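// Note: JavaScript numbers are IEEE-754 doubles; results beyond
// Number.MAX_SAFE_INTEGER (2^53 - 1) can silently lose precision,
// so use BigInt for larger inputs.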

const start = process.hrtime.bigint();
const result = computeFactorial(20);
const elapsed = process.hrtime.bigint() - start;
console.log(&quot;Result:&quot;, result);
console.log(&quot;Time taken:&quot;, elapsed, &quot;ns&quot;);
</code></pre><p>In Nodejs, the same factorial computation is performed using a recursive function. Here is the result on a 6-core machine - </p><pre><code>Result: 2432902008176640000
Time taken: 17309n ns</code></pre><p>However, JavaScript&apos;s single-threaded event loop and interpreted nature can result in slower performance compared to Go for CPU-bound tasks.</p><p>The difference in performance becomes more noticeable as the computational complexity increases or when running multiple parallel computations. Go&apos;s ability to leverage multiple cores efficiently and its compiled nature provide an advantage for CPU-bound tasks, making it a favorable choice in such scenarios.</p><p>It is evident from the above that, even for a very simple computation use case, Go is not just easy to write but also close to optimal. I have compiled some common reasons why Go is so much better than Nodejs - </p><ol><li><strong>Performance:</strong> Go is known for its high performance and efficiency. It compiles to machine code, which allows it to execute faster than interpreted languages like JavaScript, which is used in Node.js. Go&apos;s lightweight goroutines and built-in concurrency features also make it highly scalable and efficient in handling concurrent tasks.</li><li><strong>Concurrency and Parallelism:</strong> Go has excellent support for concurrency and parallelism. Goroutines and channels in Go enable concurrent programming with ease. It allows developers to efficiently handle multiple requests simultaneously, making it well-suited for building scalable and high-performance backend systems.</li><li><strong>Static Typing:</strong> Go is a statically typed language, meaning it performs type checking at compile-time. This helps catch errors early in the development process, making it easier to write reliable and maintainable code. In contrast, Node.js uses JavaScript, which is dynamically typed, allowing more flexibility but potentially leading to runtime errors.</li><li><strong>Strong Standard Library:</strong> Go comes with a rich standard library that provides a wide range of functionality out of the box. 
It includes packages for networking, encryption, HTTP, file handling, and more. This allows developers to rely on the standard library without having to rely heavily on external dependencies.</li><li><strong>Concurrency Safety:</strong> Go has built-in features like goroutines and channels, which make it easy to write concurrent code that is less prone to race conditions and other common concurrency issues. Node.js, on the other hand, requires additional effort and the use of external libraries to achieve similar levels of concurrency safety.</li><li><strong>Deployment and Execution:</strong> Go compiles to a single binary that can be easily deployed and executed on various platforms without requiring the installation of any additional runtime dependencies. Node.js applications, on the other hand, require the Node.js runtime environment to be installed on the target server, which adds complexity to deployment.</li></ol><p>More practical examples to follow soon.</p>]]></content:encoded></item><item><title><![CDATA[Build a Facebook Messenger Bot App]]></title><description><![CDATA[build a facebook messenger bot from scratch in minutes. Configure answers, questions and other types of replies. This is built using TypeScript, NodeJs, Express]]></description><link>https://prashantb.me/build-a-facebook-messenger-bot-app/</link><guid isPermaLink="false">65f6d24cd74fec1fb3af2bbd</guid><category><![CDATA[typescript]]></category><category><![CDATA[Javascript]]></category><category><![CDATA[nodejs]]></category><category><![CDATA[node]]></category><category><![CDATA[bot]]></category><category><![CDATA[facebook]]></category><category><![CDATA[messenger]]></category><category><![CDATA[express]]></category><dc:creator><![CDATA[Prashant Bansal]]></dc:creator><pubDate>Mon, 01 Oct 2018 04:35:39 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p><img src="https://prashantb.me/content/images/2019/01/fb_bot.gif" alt="fb_bot" loading="lazy"></p>
<p>Here is the source code <a href="https://github.com/prashantban/Facebook-Bot-App?ref=prashantb.me" target="_blank"> {Github Link}</a></p>
<p>The goal of this post is to showcase how easy it is to create a Facebook Messenger Bot which is capable of handling almost all of your customer&apos;s queries, questions, etc. Let&apos;s dive into the code.</p>
<h1 id="installation">Installation</h1>
<pre><code>git clone https://github.com/prashantban/Facebook-Bot-App
cd Facebook-Bot-App
npm install
npm run build (To build the app)
npm start (To run the app)
</code></pre>
<p>Before we begin running this app, we need a couple of things from <code>Facebook Developer</code>. Here is the list of things needed -</p>
<pre><code>Facebook Page 
Facebook Developer App
Access Token
App Secret
Verify Token
</code></pre>
<p>Here is the link for the <a href="https://developers.facebook.com/apps/?ref=prashantb.me">Facebook Developer Panel</a>. Now follow the instructions to create a new Facebook Messenger App.</p>
<pre><code>Add a product Messenger
Select your FB Page and generate the token. This will be your access token.
Then you need to set up a webhook endpoint. For this, we will have to write some code and expose a webhook endpoint in our app.
</code></pre>
<p><img src="https://prashantb.me/content/images/2019/01/Screen-Shot-2019-01-09-at-12.30.04-AM.png" alt="Screen-Shot-2019-01-09-at-12.30.04-AM" loading="lazy"></p>
<h1 id="webhookendpoint">Webhook Endpoint</h1>
<p>Facebook expects an endpoint in your system where messages will be posted. But prior to posting messages, Facebook makes a <code>GET</code> call to the same API to verify the webhook using the <code>Verify Token</code>. Remember the <code>Verify Token</code> from the list above; put some random string in your <code>.env</code> file for now.</p>
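<p>To make the handshake concrete, here is a minimal sketch of the verification logic that bootbot runs for you under the hood. The <code>hub.*</code> query parameters come from Facebook's webhook setup call; the function name and the Express wiring in the comments are illustrative, not part of this project.</p>

```javascript
// Pure verification logic: Facebook sends hub.mode, hub.verify_token and
// hub.challenge as query parameters; we echo the challenge back only when
// the token matches our configured verify token.
function verifyWebhook(query, verifyToken) {
  if (query["hub.mode"] === "subscribe" && query["hub.verify_token"] === verifyToken) {
    return query["hub.challenge"];
  }
  return null; // token mismatch: caller should respond with 403
}

// Illustrative Express wiring (bootbot exposes an equivalent route itself):
// app.get("/webhook", (req, res) => {
//   const challenge = verifyWebhook(req.query, process.env.VERIFY_TOKEN);
//   if (challenge !== null) res.status(200).send(challenge);
//   else res.sendStatus(403);
// });
```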
<pre><code># Any secret to be kept

APP_SECRET=&apos;&lt;Your App Secret&gt;&apos;
ACCESS_TOKEN=&apos;Access Token Copied from Your FB Page&apos;
VERIFY_TOKEN=&apos;my random string&apos;
</code></pre>
<p>We will leave this for now and get started with our code. This project will be built in <code>TypeScript</code>. Those who have little to no idea about <code>TypeScript</code> can read about it <a href="https://www.typescriptlang.org/?ref=prashantb.me">here</a>. The following things need to be present -</p>
<pre><code class="language-javascript">nodejs
npm
typescript
express
</code></pre>
<p>Obviously we have done <code>npm install</code>, so I assume all these things are available. Here is how our folder structure will be -</p>
<p><img src="https://prashantb.me/content/images/2019/01/Screen-Shot-2019-01-09-at-12.26.48-AM-1.png" alt="Screen-Shot-2019-01-09-at-12.26.48-AM-1" loading="lazy"></p>
<p>As simple as it can get, here is the code for our bot.</p>
<pre><code class="language-javascript">&quot;use strict&quot;;
import express from &quot;express&quot;;
const router = express.Router()
import logger from &quot;../util/logger&quot;;
import BootBot from &apos;bootbot&apos;;
import {Payload} from &quot;../types/payloadTypes&quot;;
import {Chat} from &quot;../types/chatTypes&quot;;
import {helpFunc} from &quot;../modules/help&quot;;
import {genreFunc} from &quot;../modules/genre&quot;;
import {greetFunc} from &quot;../modules/greet&quot;;

const bot = new BootBot({
    accessToken: process.env.ACCESS_TOKEN,
    verifyToken: process.env.VERIFY_TOKEN,
    appSecret: process.env.APP_SECRET
});

/**
 * Define Home page route
 * Not using this here though
 */
router.get(&apos;/&apos;, function (_req, res) {
    res.send(&apos;Thanks for Checking Us Out&apos;);
});

/**
 * This will get triggered as and when any one
 * opens the chat window for the first time
 */
bot.setGreetingText(&quot;Hello, Welcome to XYZ Page. Lets start by knowing your favorite genre.&quot;);

/**
 * Log the message we receive.
 * All the message with type `message`
 * will appear in this function
 */
bot.on(&apos;message&apos;, (payload : Payload, _chat : Chat) =&gt; {
    logger.info({&quot;module&quot;: &quot;Api Controller&quot;, &quot;message&quot;: payload.message.text, &quot;details&quot;: payload});
});

/**
 * Log the Attachment message we receive.
 * All the message with type `Attachment message`
 * will appear in this function
 */
bot.on(&apos;attachment&apos;, (payload : Payload, chat : Chat) =&gt; {
    logger.info({&quot;module&quot;: &quot;Api Controller&quot;, &quot;message&quot;: &quot;Recieved Attachment&quot;, &quot;details&quot;: payload});
    chat.say(&apos;We do not support this message type&apos;);
});

/**
 * Assign Bot Modules
 */
bot.module(greetFunc);
bot.module(helpFunc);
bot.module(genreFunc);

/**
 * Start the Bot
 */
bot.start(process.env.BOTPORT);

export default router;
</code></pre>
<p>To explain: FB supports several different types of messages, for example <code>Normal Message, Greeting Text, Quick Reply, Options</code>, so instead of hand-crafting the structure of each of those, we are using a library named <code>bootbot</code>. This library has a pretty neat interface for defining what type of message should be sent. Note that when instantiating <code>bootbot</code>, we pass all the necessary client details to it at the top.</p>
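<p>For instance, a module that answers with a quick-reply message could look roughly like this. <code>quickReplyFunc</code> is a hypothetical module in the same style as <code>greetFunc</code>, and the <code>{ text, quickReplies }</code> payload shape follows bootbot's documented <code>chat.say</code> API - verify it against your bootbot version.</p>

```javascript
// Hypothetical module sketch: replies to "genre" with a quick-reply message.
// The { text, quickReplies } shape is taken from bootbot's documentation -
// treat it as an assumption, not a verified contract.
const quickReplyFunc = (bot) => {
  bot.hear(["genre", /genres?/i], (_payload, chat) => {
    chat.say({
      text: "Pick a genre to explore:",
      quickReplies: ["Rock", "Jazz", "Classical"],
    });
  });
};
```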
<p>Let me put up the <code>greetFunc</code> module as well, to give more context on how a message is programmed.</p>
<pre><code class="language-javascript">import {Payload} from &quot;../types/payloadTypes&quot;;
import {Chat} from &quot;../types/chatTypes&quot;;
import logger from &quot;../util/logger&quot;;

export const greetFunc = (bot : any) =&gt; {

    bot.hear([&apos;hello&apos;, &apos;sup&apos;, /hey( there)?/i], (payload : Payload, chat : Chat) =&gt; {
        logger.info({&quot;module&quot;: &quot;Api Controller&quot;, &quot;message&quot;: payload.message.text, &quot;details&quot;: chat});
        chat.say(&apos;Hello, human friend!&apos;).then(() =&gt; {
            chat.say(&apos;Please say genre to get the list of genres to search from&apos;, { typing: true });
        });
    });

    bot.hear([/(good)?bye/i, /see (ya|you)/i, &apos;adios&apos;, &apos;get lost&apos;, &apos;thankyou&apos;], (_payload : Payload, chat : Chat) =&gt; {
        chat.say(&apos;Bye, human!&apos;);
    });

};
</code></pre>
<p>All we are doing is defining arrays of possible trigger values (plain strings and regular expressions); as soon as a visitor sends a matching message, we respond with the defined reply.</p>
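<p>Under the hood, a trigger list like the one passed to <code>bot.hear</code> can be matched roughly like this. This is an illustrative sketch of the idea, not <code>bootbot</code>&#x2019;s actual implementation:</p>
<pre><code class="language-javascript">const triggers = ['hello', 'sup', /hey( there)?/i];

// A string trigger must equal the (lowercased) text; a RegExp trigger
// matches if its test() passes.
const matchesTrigger = (text) =>
  triggers.some((t) =>
    typeof t === 'string' ? t === text.toLowerCase() : t.test(text)
  );

console.log(matchesTrigger('Hey there')); // true
console.log(matchesTrigger('goodbye'));   // false
</code></pre>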
<p>Assuming the code is pretty straightforward by now, let&#x2019;s come back to the <code>webhook verification</code> part. First, run the following -</p>
<pre><code>npm run build
npm start
./ngrok http 3000
</code></pre>
<p>Note that although we start our app on port 8080, we forward port 3000, because <code>bootbot</code> creates a subserver which by default uses port 3000.<br>
Now, go back to the Facebook Developer webhook page, select <code>Create Subscription</code>, enter the <code>ngrok</code> (or <code>localtunnel</code>) forwarding URL, paste the <code>secret string defined above</code> as the verify token, and hit verify. Facebook will then hit the endpoint with a <code>challenge</code>, which is verified against that verify token.</p>
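<p>The verification handshake itself is simple enough to express as a pure function. The query parameter names (<code>hub.mode</code>, <code>hub.verify_token</code>, <code>hub.challenge</code>) come from Facebook&#x2019;s webhook documentation; the handler below is an illustrative sketch of what <code>bootbot</code> does for you:</p>
<pre><code class="language-javascript">// Returns the challenge string to echo back when the request is a valid
// subscribe attempt with the correct verify token, or null otherwise.
const verifyWebhook = (query, verifyToken) => {
  if (query['hub.mode'] === 'subscribe') {
    if (query['hub.verify_token'] === verifyToken) {
      return query['hub.challenge'];
    }
  }
  return null;
};

const query = {
  'hub.mode': 'subscribe',
  'hub.verify_token': 'my-secret-string',
  'hub.challenge': '1158201444'
};
console.log(verifyWebhook(query, 'my-secret-string')); // '1158201444'
console.log(verifyWebhook(query, 'wrong-secret'));     // null
</code></pre>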
<p>That&#x2019;s it, we are pretty much done. You can now start a conversation with the bot.</p>
<p>As an extension, you could use some sort of database to store the users and the received messages, and run some analytics to improve the bot responses. That brings a very different set of challenges, which I leave to you.</p>
<p>Here is the source code again <a href="https://github.com/prashantban/Facebook-Bot-App?ref=prashantb.me" target="_blank"> {Github Link}</a></p>
<p>Feel free to contact me with any suggestions - dev[at]prashantb[dot]me</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Authenticating Express React App - Part2]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>As we discussed in <a href="https://prashantb.me/authenticating-express-react-app-part-1/">Part 1</a> on how to do the client side part of the token based authentication in <code>Express</code> <code>React</code> app, this post will focus on server side of the authentication. So lets get started -</p>
<h6 id="exploringtheidea">Exploring the Idea</h6>
<p>Server side has multiple responsibilities. Below are listed almost</p>]]></description><link>https://prashantb.me/authenticating-express-react-app-part2/</link><guid isPermaLink="false">65f6d24cd74fec1fb3af2bbb</guid><category><![CDATA[node]]></category><category><![CDATA[nodejs]]></category><category><![CDATA[express]]></category><category><![CDATA[react]]></category><category><![CDATA[reactjs]]></category><category><![CDATA[flux]]></category><dc:creator><![CDATA[Prashant Bansal]]></dc:creator><pubDate>Wed, 19 Apr 2017 19:27:52 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>As we discussed in <a href="https://prashantb.me/authenticating-express-react-app-part-1/">Part 1</a> on how to do the client side part of the token based authentication in <code>Express</code> <code>React</code> app, this post will focus on server side of the authentication. So lets get started -</p>
<h6 id="exploringtheidea">Exploring the Idea</h6>
<p>Server side has multiple responsibilities. Below are listed almost all of them -</p>
<ul>
<li>Generate an <code>HS256</code>-signed token (JWT) carrying minimal required user details, using a private key.</li>
<li>Expose Login/Logout/Register routes.</li>
<li>Use an in-memory DB (<code>Redis</code> in our case) to persist the token, with a defined <code>TTL</code>.</li>
<li>Call the DB to validate the token with every call that wants access to any private data.</li>
<li>Invalidate token/user if token is invalid/expired and allow the client side to immediately delete all the stores and logout the user.</li>
</ul>
<p>Here are certain advantages of the above approach -</p>
<ul>
<li>Anything in memory is much faster than reading from disk, so latency stays minimal.</li>
<li>Keeping your authenticator separate from main server offers several security advantages.</li>
<li>Most important, you now have separation of concerns. The client side is independent and faster than ever, the server no longer needs to care about sessions or users, and the in-memory DB returns results very quickly.</li>
<li>There are plenty of other advantages; I recommend reading <a href="http://jonatan.nilsson.is/stateless-tokens-with-jwt/?ref=prashantb.me">Stateless Tokens</a></li>
</ul>
<p>Let&apos;s get to some coding then -</p>
<p>Here is what my package file looks like</p>
<pre><code class="language-javascript">&quot;dependencies&quot;: {
    &quot;body-parser&quot;: &quot;^1.17.1&quot;,
    &quot;express&quot;: &quot;^4.15.2&quot;,
    &quot;flux&quot;: &quot;^3.1.2&quot;,
    &quot;jwt-decode&quot;: &quot;^2.2.0&quot;,
    &quot;jwt-redis-session&quot;: &quot;^1.0.5&quot;,
    &quot;morgan&quot;: &quot;^1.8.1&quot;,
    &quot;react&quot;: &quot;^15.5.0&quot;,
    &quot;react-dom&quot;: &quot;^15.5.0&quot;,
    &quot;react-router&quot;: &quot;^3.0.2&quot;,
    &quot;redis&quot;: &quot;^2.7.1&quot;,
    &quot;request&quot;: &quot;^2.81.0&quot;
  },
</code></pre>
<p>Next is our routes file. It handles the POST calls to Login and Logout. It&apos;s straightforward code and should be easy to understand.</p>
<pre><code class="language-javascript">
/** Login Route
 * Create session in redis
 * @return token
 */
router.post(&apos;/user/login&apos;, (req, res) =&gt; {

  const options = {
    url: &apos;API Server URL&apos;,
    body: req.body,
  };

  request.post(options, (error, response, body) =&gt; {
    if (!error &amp;&amp; response.statusCode &gt;= 200 &amp;&amp; response.statusCode &lt;= 304) {

      // this will be attached to the JWT
      var claims = {
        user: body.user,
      };
      // create session &amp; return the token
      req.jwtSession.create(claims, (error, token) =&gt; {
        res.json({
          access_token: token
        });
      });
    }

    // An error occurred - signal the client
    else {
      res.status(500).end();
    }
  });
});

</code></pre>
<p>To give more context, I am using <code>Redis</code> to store the required user details; the generated token embeds the <code>Redis key</code> and is passed to the client side for persistence. Now, every API call can be verified by decoding the token, extracting the key, and querying Redis to check whether it is still valid. If it is invalid, send a response that tells the client to log the user out immediately.</p>
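<p>Glued together, that per-request check can be sketched as Express-style middleware. This is an illustrative sketch of my own with a pluggable <code>store</code> (anything exposing <code>get(key, callback)</code>, standing in for the Redis lookup); in the real app the <code>jwt-redis-session</code> middleware wires up the equivalent for you:</p>
<pre><code class="language-javascript">// Rejects the request with 401 when the token is missing or unknown,
// so the client can immediately log the user out.
const requireValidToken = (store) => (req, res, next) => {
  const token = req.headers.token;
  if (!token) {
    return res.status(401).json({ error: 'missing token' });
  }
  store.get(token, (err, session) => {
    if (err) return res.status(500).end();
    if (!session) return res.status(401).json({ error: 'invalid token' });
    req.session = session; // downstream handlers can read the user details
    next();
  });
};
</code></pre>
<p>Because the store is injected, the same middleware is easy to exercise with an in-memory fake in tests.</p>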
<p><a href="https://github.com/prashantban/Auth?ref=prashantb.me">You can always grab the code from GITHUB</a>. Share your comments; if you see a better way, raise an issue on GitHub, or raise a pull request if you want to collaborate.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Authenticating Express React App - Part 1]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><code>React Apps</code> of any nature will have certain sections that are visible to users who have registered in your app and have logged in. Consider an E-Commerce app, where users can do everything except checkout without being logged in. So how to build such a system with as little pain</p>]]></description><link>https://prashantb.me/authenticating-express-react-app-part-1/</link><guid isPermaLink="false">65f6d24cd74fec1fb3af2bba</guid><category><![CDATA[node]]></category><category><![CDATA[nodejs]]></category><category><![CDATA[react]]></category><category><![CDATA[reactjs]]></category><category><![CDATA[token]]></category><category><![CDATA[authentication]]></category><category><![CDATA[redis]]></category><category><![CDATA[flux]]></category><category><![CDATA[express]]></category><dc:creator><![CDATA[Prashant Bansal]]></dc:creator><pubDate>Sun, 09 Apr 2017 19:03:20 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p><code>React Apps</code> of any nature will have certain sections that are visible to users who have registered in your app and have logged in. Consider an E-Commerce app, where users can do everything except checkout without being logged in. So how to build such a system with as little pain to the user and as much security to your app as possible. Lets build an <code>Express</code> app with <code>React &amp; Flux</code> driving the frontend. We will focus on <a href="https://www.w3.org/2001/sw/Europe/events/foaf-galway/papers/fp/token_based_authentication/?ref=prashantb.me"><code>Token Based Authentication</code></a> in this article.</p>
<p>Adding authentication to an app is not very complex, but here I&apos;ll put up an approach that has worked for me with very little learning curve. We will see how to set this up with React&apos;s Flux pattern. The idea is as follows -</p>
<ul>
<li>User enters username and password. On success, receives a Token.</li>
<li>Token is stored in Local Storage.</li>
<li>State of React Parent Component is populated with necessary user details from Token.</li>
<li>On hard refresh, the parent component remounts, the token is read, and state is populated again with any necessary redirections.</li>
<li>On logout, flush out the state and delete Token.</li>
<li>Part II of this tutorial will contain the server level actions and how the token could be generated, validated etc.</li>
</ul>
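<p>Several of the steps above hinge on reading user details straight out of the token. A JWT&apos;s payload is just base64-encoded JSON, which is essentially all <code>jwt-decode</code> does (no signature check). A minimal sketch of that decode, with a hand-crafted example token (the <code>user</code>/<code>exp</code> claims here are illustrative):</p>
<pre><code class="language-javascript">// Decode the middle (payload) segment of a JWT. Illustrative only -
// in the app we use the jwt-decode package instead.
const decodePayload = (token) => {
  const payloadPart = token.split('.')[1];
  return JSON.parse(Buffer.from(payloadPart, 'base64').toString('utf8'));
};

// header.payload.signature - the payload is {"user":"tony","exp":1999999999}
const payload = Buffer.from(JSON.stringify({ user: 'tony', exp: 1999999999 }))
  .toString('base64');
const token = 'xxx.' + payload + '.yyy';

console.log(decodePayload(token).user);                     // 'tony'
console.log(Date.now() >= decodePayload(token).exp * 1000); // false (not expired)
</code></pre>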
<h6 id="requiredpackage">Required Package</h6>
<p>Here is what my <code>package.json</code> looks like. The major packages are <code>React, React-Dom, Express, Flux, Jwt-Decode, React-Router</code></p>
<pre><code class="language-javascript">&quot;dependencies&quot;: {
    &quot;express&quot;: &quot;^4.14.1&quot;,
    &quot;flux&quot;: &quot;^3.1.2&quot;,
    &quot;react&quot;: &quot;^15.5.3&quot;,
    &quot;react-dom&quot;: &quot;^15.5.3&quot;,
    &quot;react-router&quot;: &quot;^3.0.2&quot;,
    &quot;jwt-decode&quot;: &quot;^2.2.0&quot;
},
</code></pre>
<h6 id="understandingflux">Understanding Flux</h6>
<p>As facebook docs say &quot;Flux is the application architecture that Facebook uses for building client-side web applications. It complements React&apos;s composable view components by utilizing a unidirectional data flow. It&apos;s more of a pattern rather than a formal framework, and you can start using Flux immediately without a lot of new code.&quot;<br>
Nothing more to add from my side on this. Here is an overview of the Flux flow in our app.</p>
<p><img src="https://prashantb.me/content/images/2017/04/React-Auth.png" alt="React Auth" loading="lazy"></p>
<p>Let&apos;s get started with the Login container -</p>
<pre><code class="language-javascript">// LoginContainer.js
export default class Login extends React.Component {

  constructor(props) {
    super(props);
    this.state = {
      user: &apos;&apos;,
      password: &apos;&apos;,
      status: &apos;&apos;,
    };
  }

  // Keep the controlled inputs in sync with state
  handleChange(event) {
    this.setState({ [event.target.name]: event.target.value });
  }

  // Here we handle the Login Event
  login(event) {
    event.preventDefault();
    Auth.login(this.state.user, this.state.password)
      .catch((err) =&gt; {
        this.setState(function(prevState, props) {
          return {status: &quot;Error&quot;};
        });
      });
  }

  render() {
    return (
      &lt;div&gt;
        &lt;h1&gt;Login {this.state.status}&lt;/h1&gt;
        &lt;form&gt;
        &lt;div&gt;
          &lt;label htmlFor=&quot;username&quot;&gt;Username&lt;/label&gt;
          &lt;input type=&quot;text&quot; value={this.state.user} name=&quot;user&quot; placeholder=&quot;UserName&quot; required=&quot;required&quot; onChange={this.handleChange.bind(this)}/&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;label htmlFor=&quot;password&quot;&gt;Password&lt;/label&gt;
          &lt;input type=&quot;password&quot; value={this.state.password} name=&quot;password&quot; placeholder=&quot;Password&quot; required=&quot;required&quot; onChange={this.handleChange.bind(this)}/&gt;
        &lt;/div&gt;
        &lt;button type=&quot;submit&quot; onClick={this.login.bind(this)}&gt;Submit&lt;/button&gt;
      &lt;/form&gt;
    &lt;/div&gt;
    );
  }
}
</code></pre>
<p>Next is our Auth utility file -</p>
<pre><code class="language-javascript">// AuthService.js
class AuthService {

  login(email, password) {
    const options = {
      url: &apos;http://localhost:9000/user/login&apos;,
      method: &apos;POST&apos;,
      body: JSON.stringify({
        &quot;email&quot;: email,
        &quot;password&quot;: password
      })
    };
    return new Promise((resolve, reject) =&gt; {
      request(options, (error, response, body) =&gt; {
        if(!error &amp;&amp; response.statusCode &gt;= 200 &amp;&amp;  response.statusCode &lt;= 304) {
          body = JSON.parse(body);
          if(body.access_granted)  
            resolve(loginUser(body.token));
          else reject(&quot;Email/Pass is Invalid&quot;);
        }
        else reject(&quot;Email/Pass is Invalid&quot;);
      })
    });
  }
}

export default new AuthService()
</code></pre>
<pre><code class="language-javascript">// Login Action
export function loginUser(token, pathname) {
  Dispatcher.handleAction({
    type: &quot;LOGIN_USER&quot;,
    data: token,
  });

  // Persist the token in either case
  localStorage.setItem(&apos;token&apos;, token);

  if(pathname) {
    browserHistory.push(pathname);
  }
  else {
    browserHistory.push(&quot;/&quot;);
  }
}
</code></pre>
<p>A couple of things to notice above. We take a <code>pathname</code> parameter; it is not passed on a fresh login, because in that case we always want to send the user to the homepage. In every other case, we redirect the user back to the page they came from.</p>
<p><code>Now let&apos;s write the routes</code></p>
<p>We are using <code>react-router</code> to handle the routes. Here is what my routes.js file looks like -</p>
<pre><code class="language-javascript">// Routes.js
const Routes = () =&gt; (
  &lt;Router&gt;
    &lt;Route path=&quot;/&quot; component={App}&gt;
      {/* Login Route */}
      &lt;Route path=&quot;login&quot; component={Login} /&gt;
      &lt;Route component={EnsureLoggedInContainer}&gt;
        &lt;Route path=&quot;home&quot; component={Home}/&gt;
      &lt;/Route&gt;
    &lt;/Route&gt;
  &lt;/Router&gt;
);
export default Routes;
</code></pre>
<p>The job of EnsureLoggedInContainer is to listen for navigation to a nested route and ensure that the user is logged in. If the user is logged in, the component does nothing and simply renders its children (the requested route). If the user is not logged in, EnsureLoggedInContainer should record the current URL for later redirection, and then direct the user to the login page.</p>
<pre><code class="language-javascript">// EnsureLoggedInContainer.js
class EnsureLoggedInContainer extends React.Component {
   constructor(props) {
      super(props)
      this.state = this.getCurrentState();
    }
    getCurrentState() {
      return {
        userLoggedIn: UserStore.isLoggedIn()
      };
    }
    componentDidMount() {      
      if (!this.state.userLoggedIn) {
        browserHistory.push(&quot;/login&quot;)
      }
    }
    render() {
      return this.props.children
  }
}
export default EnsureLoggedInContainer;

</code></pre>
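<p>The &quot;record the current URL&quot; step is not shown in the component above. One minimal way to do it is a pair of helpers around storage; the helper names here are my own (hypothetical, not from the repo), and <code>storage</code> can be <code>localStorage</code> in the browser:</p>
<pre><code class="language-javascript">// Save the URL a logged-out user tried to visit, then restore it after login.
const rememberAttemptedUrl = (storage, pathname) => {
  storage.setItem('redirectAfterLogin', pathname);
};

const popAttemptedUrl = (storage) => {
  const url = storage.getItem('redirectAfterLogin') || '/';
  storage.removeItem('redirectAfterLogin');
  return url; // falls back to the homepage when nothing was recorded
};
</code></pre>
<p>EnsureLoggedInContainer would call <code>rememberAttemptedUrl</code> before pushing the login route, and the login action would pass <code>popAttemptedUrl(...)</code> as the <code>pathname</code> seen earlier.</p>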
<p>The UserStore, like any other store, has two functions:</p>
<ul>
<li>
<p>It holds the data it gets from the actions. In our case, that data will be used by all components that need to display the user information.</p>
</li>
<li>
<p>It inherits from EventEmitter. It&#x2019;ll emit a change event every time its data changes so that components can render again.</p>
</li>
</ul>
<pre><code class="language-javascript">// UserStore.js
class UserStoreClass extends EventEmitter {

  constructor() {
    super();
    this.token = null;
    this.dispatchToken = AppDispatcher.register(this.actionDispatcher.bind(this));
    this.user = null;
    // Named loggedIn (not isLoggedIn) so the flag does not shadow the
    // isLoggedIn() method defined below.
    this.loggedIn = false;
  }
  addChangeListener(callback) {
    this.on(&apos;change&apos;, callback);
  }
  removeChangeListener(callback) {
    this.removeListener(&apos;change&apos;, callback);
  }
  getToken() {
    return this.token;
  }
  getUser() {
    return this.user;
  }
  isLoggedIn() {
    return this.loggedIn;
  }
  emitChange() {
    this.emit(&apos;change&apos;);
  }

  // Registering the Dispatcher
  actionDispatcher(payload) {
    switch (payload.action.type) {

      case &apos;LOGIN_USER&apos;:
        const token = payload.action.data;
        this.token = token;
        this.user = jwtDecode(token);
        this.loggedIn = true;
        this.emitChange();
        break;
    }
  }
}
export default new UserStoreClass();

</code></pre>
<p>Now, let&apos;s call an API. We will always have access to the user data in UserStore.<br>
Here is how you can attach the token to an API request -</p>
<pre><code class="language-javascript">const token = UserStore.getToken();
fetch(url, {
  headers: {
    &apos;token&apos; : token
  },
})
</code></pre>
<h5 id="conclusion">Conclusion</h5>
<p>We have implemented token based authentication in a React app. In Part II, I will go through how we handle the server events, add <code>Redis</code> to the flow, and generate a token and persist it for future validation. In the meantime, <a href="https://github.com/prashantban/Auth?ref=prashantb.me">grab the code from GITHUB</a>.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[JavaScript Call Stack, Event Loop and Callbacks]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Straight getting into what the title says, a <code>call stack</code> is simply a stack that the javascript run-time maintains to go through all the calls defined in the code. The concept is exactly of a stack where the execution starts from the last function entering the stack and goes on</p>]]></description><link>https://prashantb.me/javascript-call-stack-event-loop-and-callbacks/</link><guid isPermaLink="false">65f6d24cd74fec1fb3af2bb9</guid><category><![CDATA[Javascript]]></category><category><![CDATA[callstack]]></category><category><![CDATA[event loop]]></category><category><![CDATA[callback]]></category><dc:creator><![CDATA[Prashant Bansal]]></dc:creator><pubDate>Wed, 18 Jan 2017 19:27:50 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Straight getting into what the title says, a <code>call stack</code> is simply a stack that the javascript run-time maintains to go through all the calls defined in the code. The concept is exactly of a stack where the execution starts from the last function entering the stack and goes on till the first function. This is how all the synchronous languages work, but hey, <code>isn&apos;t JavaScript a non-blocking single threaded implementation?</code> lets learn in detail about this.</p>
<h6 id="javascriptcallstack">JavaScript Call Stack</h6>
<p>V8, Chrome&apos;s runtime engine, comprises a stack and a heap. The heap is used for memory allocation and the stack for function calls. Apart from these, a browser provides <code>Web APIs, an Event Loop, and a Callback Queue.</code> Here is a small description -</p>
<p><img src="https://prashantb.me/content/images/2017/01/js_runtime.png" alt="JS Runtime" loading="lazy"></p>
<ul>
<li>Web APIs - Calls such as Ajax, setTimeout, events etc. These are not part of the V8 engine; the browser provides support for them.</li>
<li>Callback Queue - All the async callbacks and events such as onclick or scroll are pushed to this queue. (Detail is provided below.)</li>
<li>Event Loop - This piece has one job: watch the stack and the queue, and when the stack is empty, push the next callback from the callback queue onto the stack.</li>
</ul>
<p>Let&apos;s understand the stack with the following code</p>
<pre><code class="language-javascript">let square = function(a, b) {
    return multiply(a, b);
}

let multiply = function(a, b) {
    return product(a, b);
}

let product = function(a, b) {
    return a * b;
}

square(10, 10); // 100

</code></pre>
<p>Here is the stack trace ( <code>console.trace()</code> )</p>
<pre><code class="language-javascript">product
multiply
square
&lt;anonymous&gt;
</code></pre>
<p>Pretty simple, isn&apos;t it? Everything is synchronous. Now let&apos;s consider an even simpler piece of code -</p>
<pre><code class="language-javascript">console.log(&quot;first&quot;);
setTimeout(function() {
    console.log(&quot;second&quot;)
}, 0);
console.log(&quot;third&quot;);
console.trace();
</code></pre>
<p>Here is the output -</p>
<pre><code class="language-javascript">first
third
console.trace()
second
</code></pre>
<p>What? Even though we set a timeout of 0 milliseconds, it still executed last.<br>
So, here comes the async nature of JavaScript. setTimeout doesn&apos;t belong to the runtime environment; it is part of the Web APIs provided by the browser. The stack receives everything in order, but the setTimeout call goes to the Web API, the timer runs for 0 ms, and when it fires the callback is not put back on the stack directly - it is placed in the <code>callback queue</code> while the stack carries on executing the next call. As soon as the stack is empty, the <code>Event Loop</code> takes the first element of the callback queue and passes it to the stack, where it finally executes.</p>
<p>Note - setTimeout does not guarantee execution after exactly the given time; it guarantees only a minimum delay before execution.</p>
<p>All the Web APIs work the same way. Any Ajax call is resolved by the browser via XHR, and its callback is passed to the queue; the program continues executing, and as soon as the stack is clear the callback is moved onto the stack and execution continues.</p>
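<p>As a side note (not covered above): promise callbacks go into a separate microtask queue, which the event loop drains before the callback queue whenever the stack empties. A quick way to observe the ordering:</p>
<pre><code class="language-javascript">const order = [];

order.push('synchronous code');
setTimeout(() => order.push('timer callback'), 0);            // callback queue
Promise.resolve().then(() => order.push('promise callback')); // microtask queue

// Only the synchronous entry exists at this point; once the stack empties,
// the microtask runs first, then the 0ms timer callback.
console.log(order); // [ 'synchronous code' ]
</code></pre>
<p>The final order is <code>synchronous code</code>, then <code>promise callback</code>, then <code>timer callback</code>.</p>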
<p>Check out this cool <a href="https://www.youtube.com/watch?v=QyUFheng6J0&amp;ref=prashantb.me">talk</a> which explains these underlying concepts of JavaScript.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Know ES6 Better!]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>I have been working around with <code>ES6</code> for long time now and I would like to share some interesting things that I found useful in my day to day work life. You will find lots of resources describing new features, but are you really willing to accept all the syntax!</p>]]></description><link>https://prashantb.me/know-es6-better/</link><guid isPermaLink="false">65f6d24cd74fec1fb3af2bb7</guid><category><![CDATA[Javascript]]></category><category><![CDATA[JavaScript ES6]]></category><category><![CDATA[es6]]></category><category><![CDATA[map]]></category><category><![CDATA[set]]></category><dc:creator><![CDATA[Prashant Bansal]]></dc:creator><pubDate>Sun, 17 Jul 2016 19:50:18 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>I have been working around with <code>ES6</code> for long time now and I would like to share some interesting things that I found useful in my day to day work life. You will find lots of resources describing new features, but are you really willing to accept all the syntax! May be not. So, let me get straight to my take -</p>
<h5 id="1arguments">1. Arguments</h5>
<p>Consider a simple case of writing a program to add all the numbers provided as argument. Here were your options prior to ES6 -</p>
<pre><code class="language-javascript">function getSum() {
    var sum = 0;
    for (var i = 0; i &lt; arguments.length; i++) {
        sum += arguments[i];
    }
    return sum;
}

// Functional JS Approach
function getSum() {
    return [].reduce.call(arguments, function(a,b){return a+b;});
}
</code></pre>
<p>ES6 introduces a concept I remember being taught in C++ class: variadic arguments, known in ES6 as rest parameters. Now we have the arguments directly as a real array. Let&apos;s see the code -</p>
<pre><code class="language-javascript">const getSum = (...args) =&gt; {
    return args.reduce((a, b) =&gt; a + b);
}
</code></pre>
<h5 id="2destructuring">2. Destructuring</h5>
<p>This one has saved me a lot of coding time. When you are dealing with objects or arrays with multiple properties, with multiple arguments, or when you are not sure about argument ordering, <code>destructuring</code> will help you. Let&apos;s see what you can do with it -</p>
<pre><code class="language-javascript">var obj = {foo: 1, foo2: 2}
var { foo, foo2 } = obj;
console.log(foo); // Logs 1

// With Array
var arr = [&apos;tony&apos;,&apos;greg&apos;];
var [ fname, lname ] = arr;
console.log(fname);
</code></pre>
<p>So basically, when you have objects with long property lists and you want to destructure individual properties, you can use this simple declaration. In a way I feel it is pretty similar to macros in C++ (highly debatable though). Here is a deeper example -</p>
<pre><code class="language-javascript">function calcBMI({
    weight,
    height,
    max = 25,
    callback
}) {
    var bmi = weight / Math.pow(height, 2);
    if (bmi &gt; max) {
        console.log(&quot;You are overweight&quot;);
    } else console.log(bmi);
    if (callback) {
        callback(bmi);
    }
}

var weight = 70;   // kilograms
var height = 1.75; // metres

calcBMI({
    weight,
    height
});
calcBMI({
    weight,
    height,
    max: 30
});
calcBMI({
    weight,
    height,
    max: 30,
    callback: function(x) {
        // do something with the computed BMI
    }
});
</code></pre>
<p>With this, you do not have to worry about argument ordering or missing arguments. Very very useful concept.</p>
<h5 id="3defaultfunctionparameters">3. Default Function Parameters</h5>
<p>This was probably the most awaited feature. Previously we had to write if statements to check whether a value was passed before giving it a default. Now we can do the following -</p>
<pre><code class="language-javascript">let retFullName = (firstName, lastName = &apos;Cozy&apos;) =&gt;
    firstName + &apos; &apos; + lastName;
retFullName(&apos;Tony&apos;); // Tony Cozy
</code></pre>
<h5 id="4mapdatastructure">4. MAP (Data Structure)</h5>
<p>JavaScript never had an explicit Map data structure, though plain objects served as associative arrays and did the job fairly well. The Map object is a simple key/value collection: any value (both objects and primitive values) may be used as either a key or a value. It also helps that the constructor accepts an <code>iterable</code> as a parameter and that you can define your own iteration behavior.</p>
<p>Note - Internally this is different from conventional hash maps: normal iteration yields elements in insertion order.</p>
<pre><code class="language-javascript">const myMap = new Map();
const obj = {&apos;a&apos;:&apos;b&apos;, &apos;c&apos;:&apos;d&apos;};
myMap.set(&quot;name&quot;,&quot;tony&quot;);
myMap.set(&quot;marks&quot;,[50,20,30]);
myMap.set(obj, &quot;some value&quot;);
for (let [key, value] of myMap) {
  console.log(key + &quot; = &quot; + value);
}

// Output
name = tony
marks = 50,20,30
[object Object] = some value
</code></pre>
<h5 id="5setdatastructure">5. SET (Data Structure)</h5>
<p>Similar to Map, Sets are also available in ES6. Recall that the set data structure keeps only unique values and discards any duplicates. Let&apos;s see an example -</p>
<pre><code class="language-javascript">const set = new Set();
set.add(&apos;a&apos;);
set.add(&apos;b&apos;);
set.add(&apos;a&apos;);

// Lets check the result
set.forEach((a) =&gt; console.log(a));
a
b

set.add({&apos;a&apos;: &apos;b&apos;});
set.add({&apos;a&apos;: &apos;b&apos;});
set.forEach((a) =&gt; console.log(a));

// Important to note this
a
b
Object { a: &quot;b&quot; }
Object { a: &quot;b&quot; }

</code></pre>
<p>As we see above, sets can take any type and will store unique values. However, even though I added two objects with the same properties, the set treated them as distinct objects, because that is how JavaScript treats objects: every instance is unique.</p>
<h5 id="6customiterators">6. Custom Iterators</h5>
<p>Collections like Map and Set, and even plain objects, can have custom iterators, as mentioned above. Of course the default iterators simply iterate by position or insertion order, but there are cases where we want to iterate in a defined order. You might want to read about Symbol.iterator; here is a link with detailed info - <a href="https://prashantb.me/iterators-in-javascript-es6/">link</a><br>
Let&apos;s create a map, insert values in random order, and then define an iterator that yields them sorted by key.</p>
<pre><code class="language-javascript">const mp = new Map();
mp.set(10, 20);
mp.set(0, 12);
mp.set(3, &quot;name&quot;);

// Custom Iterator
mp[Symbol.iterator] = function() {
    let _this = this;
    let keys = null;
    let index = 0;

    return {
        next: function() {
            if (keys === null) {
                keys = [];
                for (let key of _this.keys()) {
                    keys.push(key);
                }
                keys.sort((a, b) =&gt; a - b);
            }

            return {
                value: _this.get(keys[index]),
                key: keys[index],
                done: index++ &gt;= keys.length
            };
        }
    }
}

let it = mp[Symbol.iterator]();
let res = it.next();
while (!res.done) {
    console.log(res.key + &quot; = &quot; + res.value);
    res = it.next();
}

// Output
0 = 12
3 = name
10 = 20
</code></pre>
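<p>A side note from me: since the iterator is installed under Symbol.iterator, <code>for...of</code> picks it up automatically, and an ES6 generator function makes the same custom iterator much shorter. A sketch of the same sorted-iteration idea:</p>
<pre><code class="language-javascript">const sorted = new Map([[10, 'c'], [0, 'a'], [3, 'b']]);

// A generator is itself an iterator factory, so it can serve as
// Symbol.iterator directly.
sorted[Symbol.iterator] = function* () {
  const keys = [...this.keys()].sort((a, b) => a - b);
  for (const k of keys) {
    yield [k, this.get(k)];
  }
};

const entries = [];
for (const [k, v] of sorted) {
  entries.push(k + '=' + v);
}
console.log(entries.join(', ')); // 0=a, 3=b, 10=c
</code></pre>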
<p>As always, please <a href="mailto:prashantban@gmail.com">Email</a> to provide your comments.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Design Patterns in JavaScript - Part 2]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Hello folks! A month back I wrote a post explaining how basic fundamental design patterns could be written in JavaScript language. I received few emails of appreciation as well as advice and on one such advice, I am sharing my experience of the current patterns which actually align with your</p>]]></description><link>https://prashantb.me/design-patterns-in-javascript-part-2/</link><guid isPermaLink="false">65f6d24cd74fec1fb3af2bb8</guid><category><![CDATA[Javascript]]></category><category><![CDATA[design patterns]]></category><category><![CDATA[design]]></category><category><![CDATA[AMD]]></category><category><![CDATA[Revealing Module]]></category><dc:creator><![CDATA[Prashant Bansal]]></dc:creator><pubDate>Thu, 14 Jul 2016 17:45:49 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Hello folks! A month back I wrote a post explaining how basic fundamental design patterns could be written in JavaScript language. I received few emails of appreciation as well as advice and on one such advice, I am sharing my experience of the current patterns which actually align with your day to day development in JavaScript (this time it is specific to JavaScript). Lets get started -</p>
<h5 id="1revealingmodulepattern">1. Revealing Module Pattern</h5>
<p>Anyone who started learning programming will have encountered the term Object Oriented Programming, and within it <strong>Encapsulation</strong>. JavaScript in particular does not enjoy the liberty of <code>visibility modifiers</code> that other languages do, but we still want to implement encapsulation. This can be achieved through the <code>Revealing Module Pattern</code>. Here is what the core syntax looks like -</p>
<pre><code class="language-javascript">var myReveal = (function() {
    // private properties
    // private functions
    return { /* anything you want public, returned as an object */ };
})();
</code></pre>
<p>Usage -</p>
<pre><code class="language-javascript">myReveal.ReturnedObjectProperty
</code></pre>
<p>I would still suggest learning about closures, but even if you are not willing to use raw closures to hide methods or properties, this pattern is really handy. Let&apos;s write a simple real world example using <code>WebSQL</code> to get a clearer understanding -</p>
<script src="https://gist.github.com/prashantban/2d07a676a563691ae1aa1b24eddcb375.js"></script>
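<p>To distill the pattern down, here is a minimal self-contained illustration (a hypothetical counter module of my own, separate from the gist above):</p>
<pre><code class="language-javascript">// A minimal revealing module: only increment() and count() are exposed;
// the counter variable stays private inside the closure.
var counterModule = (function() {
  var counter = 0; // private

  function increment() {
    counter += 1;
    return counter;
  }

  function count() {
    return counter;
  }

  return { increment: increment, count: count };
})();

counterModule.increment();
console.log(counterModule.count());  // 1
console.log(counterModule.counter);  // undefined - not reachable from outside
</code></pre>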
<h5 id="2asynchronousmoduledefinitionamd">2. Asynchronous Module Definition (AMD)</h5>
<p>This pattern is slowly taking hold across the internet because, as the title says, it is <code>Asynchronous</code>. It takes the best of every approach and still loads modules in parallel. It ensures not only encapsulation but also avoids global variables and namespace pollution, and it works very well within the browser. Let&apos;s quickly get into the basic syntax -</p>
<pre><code class="language-javascript">// Import all modules in a single location
define([&quot;libs/otherModuleName&quot;],
    function(otherModuleName) {
        // Export your module as an object
        return {
            myFunction: function() {
                return otherModuleName.otherExportedFunction() + 1;
            }
        };
    }
);
</code></pre>
<p>This is not all of it, just a beginning. Anyway, let's talk about actual usage. I shall be using <code>requireJs</code>, a popular script loader that supports AMD. Cool, let's build something with it: a simple script that attaches a jQuery datepicker to the DOM.</p>
<pre><code class="language-javascript">requirejs([&apos;jquery&apos;, &apos;bootstrap&apos;, &apos;jq-datepicker&apos;],
    function($) {
        $(&apos;.datepicker&apos;).datepicker({
            format: &apos;dd/mm/yyyy&apos;,
            language: &apos;da&apos;,
            keyboardNavigation: false,
            autoclose: true
        });
    });

// HTML Code

&lt;div class=&apos;datepicker&apos;&gt;&lt;/div&gt;

</code></pre>
<p>Simple, isn&apos;t it? Let&apos;s talk about the advantages of this style -</p>
<ol>
<li>I am always sure that the dependencies are loaded before the defined function.</li>
<li>I can simply extend this function to do all stuff required.</li>
<li>None of the variables are global, hence no issue of messing up your design in browser.</li>
</ol>
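<p>One practical note: the snippet above assumes RequireJS already knows where the module names <code>jquery</code>, <code>bootstrap</code> and <code>jq-datepicker</code> live. That mapping is usually declared once with <code>requirejs.config</code>; the paths below are illustrative, so adjust them to your own layout -</p>
<pre><code class="language-javascript">requirejs.config({
    baseUrl: 'js/lib',
    paths: {
        // module name -> file path, without the .js extension
        'jquery': 'jquery.min',
        'bootstrap': 'bootstrap.min',
        'jq-datepicker': 'bootstrap-datepicker.min'
    },
    shim: {
        // non-AMD scripts declare their dependencies here
        'bootstrap': ['jquery'],
        'jq-datepicker': ['jquery', 'bootstrap']
    }
});
</code></pre>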
<p>AMD is definitely my favorite, and a while back I wrote a simple script that converts a flat JSON into a Bootstrap table. Here is the <a href="https://github.com/prashantban/JSON-to-Bootstrap-Table?ref=prashantb.me">link</a>.</p>
<p>That&apos;s it folks for now!</p>
<p>As always, please <a href="mailto:prashantban@gmail.com">Email</a> to provide your comments.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Design Patterns in JavaScript]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Let's look into constructing a few of the common design patterns in JavaScript using object-oriented code. We will discuss one pattern from each of the three common types - <code>Creational - Singleton</code>, <code>Behavioral - Observer</code> and <code>Structural - Decorator</code>. Before getting to the code, let's see what</p>]]></description><link>https://prashantb.me/design-patterns-in-javascript/</link><guid isPermaLink="false">65f6d24cd74fec1fb3af2bb6</guid><category><![CDATA[Javascript]]></category><category><![CDATA[design patterns]]></category><category><![CDATA[singleton]]></category><category><![CDATA[observer]]></category><category><![CDATA[decorator]]></category><category><![CDATA[design]]></category><category><![CDATA[pattern]]></category><dc:creator><![CDATA[Prashant Bansal]]></dc:creator><pubDate>Mon, 25 Apr 2016 22:36:01 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Let's look into constructing a few of the common design patterns in JavaScript using object-oriented code. We will discuss one pattern from each of the three common types - <code>Creational - Singleton</code>, <code>Behavioral - Observer</code> and <code>Structural - Decorator</code>. Before getting to the code, let's see what a Design Pattern is.</p>
<h6 id="whatisadesignpattern">What is a Design Pattern ?</h6>
<p>A design pattern is a general reusable solution to a commonly occurring problem within a given context in software design. It is not a finished design that can be transformed directly into source or machine code but a description or template for how to solve a problem that can be used in many different situations. [Source : Wikipedia]</p>
<h6 id="singletonpattern">Singleton Pattern</h6>
<p><code>Singleton pattern</code> is a design pattern that restricts the instantiation of a class to one object. We may need this to make sure there is only one resource on which any action takes place. For example, a Logger class - we will always want every thread/process to write to a single log file, without overwriting each other. Another example would be publishing scores in a game. So how do we go about creating a class for this particular requirement? Here is my take -</p>
<script src="https://gist.github.com/prashantban/8ba9adca5b2eaf8cfbc7f1cc1e3ae91d.js"></script>
<p>The code above looks clean and simple. All we are doing is making sure that whichever way we create the object, it will always point to a single instance.</p>
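<p>For readers who skip the gist, here is a minimal sketch of the same idea (illustrative names, not the exact gist code) - a lazily created instance hidden behind a closure -</p>
<pre><code class="language-javascript">var Logger = (function() {
    var instance = null; // the one and only instance

    function createInstance() {
        return {
            lines: [],
            log: function(message) { this.lines.push(message); }
        };
    }

    return {
        getInstance: function() {
            if (instance === null) {
                instance = createInstance(); // created lazily, exactly once
            }
            return instance;
        }
    };
})();

var a = Logger.getInstance();
var b = Logger.getInstance();
console.log(a === b); // true - both names point to the same instance
</code></pre>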
<h6 id="observerpattern">Observer Pattern</h6>
<p><code>Observer pattern</code> is a design pattern in which an object, called the Observable, maintains a list of its dependents, called Subscribers, and notifies them automatically of any state changes, usually by calling one of their methods. This pattern is very useful and is essentially the core logic behind the <mark>MVC Pattern</mark>, where the view represents the Subscriber and the model represents the Observable. There are tons of other applications, and a simple Google search will list them. Let's dive into writing the code -</p>
<script src="https://gist.github.com/prashantban/746a872f13beb781155d2c7af699a188.js"></script>
<p>We have a class named <mark>Observable</mark> which maintains a list of <mark>Subscribers</mark>. On the <mark>Observable Prototype</mark>, we have simple functions to Subscribe, Unsubscribe, Publish and ShowSubscribers. I do not think more description is required as the code is pretty simple; shoot me an <a href="mailto:prashantban@gmail.com">Email</a> if you want it explained. Here is the output of the above code -</p>
<pre><code>Subscriber Tony Registered.
Subscriber Anthony Registered.
Subscriber Martial Registered.
Tony received the message - Stock Price gets Cheap
Anthony received the message - Stock Price gets Cheap
Martial received the message - Stock Price gets Cheap
Subscriber - 1 : Name is - Tony
Subscriber - 2 : Name is - Anthony
Subscriber - 3 : Name is - Martial
Subscriber Anthony UnRegistered.
Tony received the message - Stock Price gets Higher
Martial received the message - Stock Price gets Higher
</code></pre>
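<p>A stripped-down version of the same machinery (an illustrative sketch, not the gist's exact code) fits in a few lines -</p>
<pre><code class="language-javascript">function Observable() {
    this.subscribers = []; // list of { name, callback } entries
}

Observable.prototype.subscribe = function(name, callback) {
    this.subscribers.push({ name: name, callback: callback });
};

Observable.prototype.unsubscribe = function(name) {
    this.subscribers = this.subscribers.filter(function(s) {
        return s.name !== name;
    });
};

Observable.prototype.publish = function(message) {
    this.subscribers.forEach(function(s) {
        s.callback(s.name + ' received the message - ' + message);
    });
};

var stock = new Observable();
stock.subscribe('Tony', function(m) { console.log(m); });
stock.subscribe('Anthony', function(m) { console.log(m); });
stock.publish('Stock Price gets Cheap');  // both subscribers are notified
stock.unsubscribe('Anthony');
stock.publish('Stock Price gets Higher'); // only Tony is notified
</code></pre>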
<h6 id="decoratorpattern">Decorator Pattern</h6>
<p><code>Decorator pattern</code> is a design pattern that allows behavior to be added to an individual object, either statically or dynamically, without affecting the behavior of other objects from the same class. This pattern is part of the <mark>Structural Design Patterns</mark>, and it is easy to understand what is happening because we deal with extending classes all the time. Specifically, a <mark>Decorator</mark> wraps an object to extend a particular feature or add extra information to it. Examples of this pattern include GUI libraries, which require new components to be added all the time. Here is a simple example with a Pizza -</p>
<script src="https://gist.github.com/prashantban/8b3b76fdbaec1d827281e7cdddc9f416.js"></script>
<p>Here, a decorator function is available that gives the option of adding cheese to the pizza. Once added, the price changes dynamically. Here is the output -</p>
<pre><code>Initial Price of Pizza - 10
After Adding Extra Cheese, New Price of Pizza - 11
</code></pre>
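<p>In sketch form (again with illustrative names, not the gist's exact code), the decorator simply wraps an existing pizza object and overrides only what changes -</p>
<pre><code class="language-javascript">function Pizza() {
    this.getPrice = function() { return 10; };
}

// decorator: mutates only the wrapped object's getPrice,
// leaving every other Pizza instance untouched
function addExtraCheese(pizza) {
    var basePrice = pizza.getPrice();
    pizza.getPrice = function() { return basePrice + 1; };
    return pizza;
}

var pizza = new Pizza();
console.log('Initial Price of Pizza - ' + pizza.getPrice());
addExtraCheese(pizza);
console.log('After Adding Extra Cheese, New Price of Pizza - ' + pizza.getPrice());
</code></pre>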
<p>That is it, folks. I will be back with more interesting articles. In the meantime, please <a href="mailto:prashantban@gmail.com">EMAIL</a> your feedback.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Working with Java Spark Framework]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Creating a REST-based full-stack application requires quite a bit of hands-on knowledge of back-end as well as front-end services. In this post, we are looking at a Java micro web framework named <mark><a href="http://sparkjava.com/?ref=prashantb.me">Spark</a></mark>. We will build a REST-based system where we can manage</p>]]></description><link>https://prashantb.me/working-with-java-spark-framework/</link><guid isPermaLink="false">65f6d24cd74fec1fb3af2bb5</guid><category><![CDATA[Javascript]]></category><category><![CDATA[java]]></category><category><![CDATA[spark]]></category><category><![CDATA[junit]]></category><category><![CDATA[freemarker template]]></category><category><![CDATA[ftl]]></category><dc:creator><![CDATA[Prashant Bansal]]></dc:creator><pubDate>Tue, 26 Jan 2016 19:40:12 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Creating a REST-based full-stack application requires quite a bit of hands-on knowledge of back-end as well as front-end services. In this post, we are looking at a Java micro web framework named <mark><a href="http://sparkjava.com/?ref=prashantb.me">Spark</a></mark>. We will build a REST-based system where we can manage user data.</p>
<h3 id="requirementstogetalong">Requirements to get along</h3>
<p><mark>Spark</mark> is probably the easiest framework available for building a micro project. It removes the configuration hassles required while working with <mark>Spring or JSP</mark> etc. Let's get down to business. We will use <mark>Java 8 and Maven</mark> for the application. If you do not have them installed, you can follow these steps -</p>
<h4 id="installjava8">Install Java 8</h4>
<pre><code>On Ubuntu
$ sudo add-apt-repository ppa:webupd8team/java
$ sudo apt-get update
$ sudo apt-get install oracle-java8-installer 
$ sudo apt-get install oracle-java8-set-default

On Mac, one can directly download JDK 8 from the Oracle website

Check Java Version
$ java -version
java version &quot;1.8.0_72&quot;

</code></pre>
<h4 id="installmaven">Install Maven</h4>
<pre><code>$ sudo apt-get install maven
</code></pre>
<p>It will take a while to install.</p>
<h3 id="creatingthepackage">Creating the Package</h3>
<p>A Maven application requires certain configuration files. One such file is <mark>pom.xml</mark>. Whether you use <mark>NetBeans</mark> or <mark>Eclipse</mark>, pom.xml is created automatically. We will need to edit this file in order to get started. <mark>Spark</mark> must be included in our package, so add the following dependencies to pom.xml -</p>
<pre><code>&lt;dependencies&gt;
    &lt;dependency&gt;
        &lt;groupId&gt;com.sparkjava&lt;/groupId&gt;
        &lt;artifactId&gt;spark-core&lt;/artifactId&gt;
        &lt;version&gt;2.3&lt;/version&gt;
    &lt;/dependency&gt;
    &lt;dependency&gt;
        &lt;groupId&gt;com.sparkjava&lt;/groupId&gt;
        &lt;artifactId&gt;spark-template-freemarker&lt;/artifactId&gt;
        &lt;version&gt;2.3&lt;/version&gt;
    &lt;/dependency&gt;
    &lt;dependency&gt;
        &lt;groupId&gt;com.fasterxml.jackson.core&lt;/groupId&gt;
        &lt;artifactId&gt;jackson-core&lt;/artifactId&gt;
        &lt;version&gt;2.5.1&lt;/version&gt;
    &lt;/dependency&gt;
    &lt;dependency&gt;
        &lt;groupId&gt;com.fasterxml.jackson.core&lt;/groupId&gt;
        &lt;artifactId&gt;jackson-databind&lt;/artifactId&gt;
        &lt;version&gt;2.5.1&lt;/version&gt;
    &lt;/dependency&gt;
&lt;/dependencies&gt;
</code></pre>
<ul>
<li>spark-core - Java Spark Framework</li>
<li>spark-template-freemarker - Free Marker Template for Front End</li>
<li>jackson-core - Basic JSON streaming API implementation</li>
</ul>
<h3 id="folderstructure">Folder Structure</h3>
<p>The folder structure must look like below -<br>
<img src="https://prashantb.me/content/images/2016/01/folder.jpg" alt="Folder Structure" loading="lazy"></p>
<h3 id="mainclass">Main class</h3>
<p>Like all MVC frameworks, Spark requires writing <mark>Routes</mark> with get, put, post and delete methods. In our application, we first need the imports in our driver class.</p>
<pre><code>import TemplateEngine.FreeMarkerEngine;
import java.io.IOException;
import static spark.Spark.*;
import spark.ModelAndView;
import com.fasterxml.jackson.core.JsonParseException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;
import java.io.StringWriter;
import java.util.HashMap;
import java.util.Map;

public class MainClass {
    
    /**
     *  Entry Point
     * @param args
     */
    public static void main(String[] args) {
        MainClass s = new MainClass();
        s.init();
    }

    // the init() method shown in the next snippet also lives in this class
}
</code></pre>
<p>Now we shall write the <mark>init()</mark> function to define our first route. Here is the code -</p>
<pre><code>private void init() {
        get(&quot;/&quot;, (request, response) -&gt; {
           Map&lt;String, Object&gt; viewObjects = new HashMap&lt;String, Object&gt;();
           viewObjects.put(&quot;title&quot;, &quot;Welcome to Spark Project&quot;);
           viewObjects.put(&quot;templateName&quot;, &quot;home.ftl&quot;);
           return new ModelAndView(viewObjects, &quot;main.ftl&quot;);
        }, new FreeMarkerEngine());
}
</code></pre>
<h4 id="frontend">Front End</h4>
<p>As described, we now need to write the .ftl file which will contain our HTML code. We shall use a Bootstrap starter template to get started.</p>
<pre><code>&lt;html&gt;
    &lt;head&gt;
        &lt;title&gt;Spark Project&lt;/title&gt;
        &lt;link rel=&quot;stylesheet&quot; href=&quot;css/bootstrap.min.css&quot;&gt;
        &lt;link rel=&quot;stylesheet&quot; href=&quot;css/bootstrap-theme.min.css&quot;&gt;
        &lt;link rel=&quot;stylesheet&quot; href=&quot;css/starter-template.css&quot;&gt;
    &lt;/head&gt;
    &lt;body&gt;

        &lt;div class=&quot;navbar navbar-inverse navbar-fixed-top&quot; role=&quot;navigation&quot;&gt;
            &lt;div class=&quot;container&quot;&gt;
                &lt;div class=&quot;navbar-header&quot;&gt;
                    &lt;button type=&quot;button&quot; class=&quot;navbar-toggle&quot; data-toggle=&quot;collapse&quot; data-target=&quot;.navbar-collapse&quot;&gt;
                        &lt;span class=&quot;sr-only&quot;&gt;Toggle navigation&lt;/span&gt;
                        &lt;span class=&quot;icon-bar&quot;&gt;&lt;/span&gt;
                        &lt;span class=&quot;icon-bar&quot;&gt;&lt;/span&gt;
                        &lt;span class=&quot;icon-bar&quot;&gt;&lt;/span&gt;
                    &lt;/button&gt;
                    &lt;a class=&quot;navbar-brand&quot; href=&quot;#&quot;&gt;&lt;/a&gt;
                &lt;/div&gt;
                &lt;div class=&quot;collapse navbar-collapse&quot;&gt;
                    &lt;ul class=&quot;nav navbar-nav&quot;&gt;
                        &lt;li class=&quot;active&quot;&gt;&lt;a href=&quot;/&quot;&gt;Home&lt;/a&gt;&lt;/li&gt;
                        &lt;li&gt;&lt;a href=&quot;createUser&quot;&gt;Create User&lt;/a&gt;&lt;/li&gt;
                        &lt;li&gt;&lt;a href=&quot;getAllUsers&quot;&gt;Get All Users&lt;/a&gt;&lt;/li&gt;
                        &lt;li&gt;&lt;a href=&quot;updateUser&quot;&gt;Update User&lt;/a&gt;&lt;/li&gt;
                        &lt;li&gt;&lt;a href=&quot;removeUser&quot;&gt;Remove User&lt;/a&gt;&lt;/li&gt;
                    &lt;/ul&gt;
                &lt;/div&gt;&lt;!--/.nav-collapse --&gt;
            &lt;/div&gt;
        &lt;/div&gt;
        &lt;script src=&quot;js/jquery.min.js&quot;&gt;&lt;/script&gt;
        &lt;script src=&quot;js/bootstrap.min.js&quot;&gt;&lt;/script&gt;
        &lt;div class=&quot;container&quot;&gt;
            &lt;#include &quot;${templateName}&quot;&gt;
        &lt;/div&gt;
    &lt;/body&gt;
&lt;/html&gt;
</code></pre>
<p>Now create your home file.</p>
<pre><code>&lt;div class=&quot;starter-template&quot;&gt;
   &lt;h2&gt;${title}&lt;/h2&gt;
&lt;/div&gt;
</code></pre>
<h3 id="runningtheapplication">Running the Application</h3>
<p>In order to run the application, we need to add some configuration to our <mark>pom.xml</mark> file to let Maven know what to do. Here is the code -</p>
<pre><code>&lt;name&gt;SparkProject&lt;/name&gt;
    &lt;build&gt;  
        &lt;plugins&gt;  
            &lt;plugin&gt;  
                &lt;groupId&gt;org.codehaus.mojo&lt;/groupId&gt;  
                &lt;artifactId&gt;exec-maven-plugin&lt;/artifactId&gt;  
                &lt;version&gt;1.2.1&lt;/version&gt;  
                &lt;executions&gt;  
                    &lt;execution&gt;  
                    &lt;phase&gt;test&lt;/phase&gt;  
                    &lt;goals&gt;  
                    &lt;goal&gt;java&lt;/goal&gt;  
                    &lt;/goals&gt;  
                    &lt;configuration&gt;  
                    &lt;mainClass&gt;Driver.MainClass&lt;/mainClass&gt;  
                    &lt;/configuration&gt;  
                    &lt;/execution&gt;  
                &lt;/executions&gt;  
            &lt;/plugin&gt;  
        &lt;/plugins&gt;  
    &lt;/build&gt;
</code></pre>
<p>Now we can run -</p>
<pre><code>mvn clean install
navigate to http://localhost:4567/
</code></pre>
<p>You should see the app running.</p>
<h3 id="restframework">Rest Framework</h3>
<p>We want to make our application RESTful, hence we need to deal with JSON for almost everything. I&apos;ll demonstrate how we can delete a user. Check the code below -</p>
<pre><code>get(&quot;/removeUser&quot;, (request, response) -&gt; {
    Map&lt;String, Object&gt; viewObjects = new HashMap&lt;String, Object&gt;();
    viewObjects.put(&quot;templateName&quot;, &quot;removeUser.ftl&quot;); 
    viewObjects.put(&quot;users&quot;, toJSON(mod.sendUsersId()));
    response.status(200);
    return new ModelAndView(viewObjects, &quot;main.ftl&quot;);
        }, new FreeMarkerEngine());

put(&quot;/removeUser/:id&quot;, (request, response) -&gt; {
     String id = request.params(&quot;:id&quot;);
     if(mod.removeUser(id)) {
         response.status(200);
         return &quot;User Removed&quot;;
     }
     else {
         response.status(404);
         return &quot;No Such User Found&quot;;
     }
});
</code></pre>
<p>Here, we are using the same URL for two different purposes. When you hit <mark>/removeUser/</mark>, it renders the remove-user template, but when you make a PUT request to <mark>/removeUser/SomeID</mark>, that SomeID is checked in the database; if found, the user is deleted, otherwise a 404 with a simple message is sent back. See how easily we can play with the response codes.<br>
<a href="https://github.com/prashantban/Java-Spark-FTL?ref=prashantb.me">Click here to see the full code</a></p>
<h3 id="testcases">Test Cases</h3>
<p>Java Spark can easily be integrated with the JUnit library. We shall write unit test cases for our application. It&#x2019;s simple because we do not need to mock anything related to Spark itself; we just need to mock our Model, which represents the way we access the database. The Model has a simple interface, so mocking it is straightforward. Let&#x2019;s look at some examples.</p>
<pre><code>import Model.Model;
import User.User;
import org.junit.Test;
import static org.junit.Assert.*;
import org.easymock.EasyMock;
import static org.easymock.EasyMock.*;

public class CreateUserTest {
    
    public CreateUserTest() {}
    
    @Test
    public void aUserIsNotValid() {
        User usr = new User();
        usr.setId(&quot;T12&quot;);
        usr.setFirstName(&quot;Test&quot;);
        usr.setAge(10);
        usr.setGender(&apos;X&apos;);
        usr.setLastName(&quot;Test&quot;);
        usr.setPhone(&quot;122&quot;);
        assertTrue(!usr.isValid());
    }
    
    @Test
    public void aUserIsCorrectlyCreated() {
        User usr = new User();
        usr.setId(&quot;T12&quot;);
        usr.setFirstName(&quot;Test&quot;);
        usr.setAge(10);
        usr.setGender(&apos;M&apos;);
        usr.setLastName(&quot;Test&quot;);
        usr.setPhone(&quot;1234567891&quot;);
        assertTrue(usr.isValid());
        
        // Mock the Model so the test never touches a real database.
        // The expectation must be replayed and exercised, otherwise it is never checked.
        Model model = EasyMock.createMock(Model.class);
        expect(model.createUser(&quot;T13&quot;, &quot;Test&quot;, &quot;&quot;, &quot;Test&quot;, 20, &apos;M&apos;, &quot;1234567891&quot;, 12)).andReturn(1);
        replay(model);
        assertEquals(1, model.createUser(&quot;T13&quot;, &quot;Test&quot;, &quot;&quot;, &quot;Test&quot;, 20, &apos;M&apos;, &quot;1234567891&quot;, 12));
        verify(model);
    }

}
</code></pre>
<p>As you can see, it is so easy to check each and every unit of our code.</p>
<h3 id="conclusion">Conclusion</h3>
<p>You can follow this <a href="https://github.com/prashantban/Java-Spark-FTL?ref=prashantb.me">GitHub link</a> for all the code.<br>
I hope this post gave a helpful insight into writing a full-stack, REST-based application with Java. Feel free to send comments and corrections as well as suggestions.</p>
<h3 id="importantlinks">Important Links</h3>
<ul>
<li><mark><a href="http://sparkjava.com/?ref=prashantb.me">Spark</a></mark></li>
<li><mark><a href="https://github.com/prashantban/Java-Spark-FTL?ref=prashantb.me">App Code</a></mark></li>
</ul>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Create and publish your first Node JS package]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Creating a node.js package and publishing it for other people to use is a pretty simple process. Through this post I shall walk through the process by creating a small library and publishing it.</p>
<p>Before diving into writing the code and creating the package, make sure you have</p>]]></description><link>https://prashantb.me/creating-and-publish-your-first-node-js-package/</link><guid isPermaLink="false">65f6d24cd74fec1fb3af2bb4</guid><category><![CDATA[Javascript]]></category><category><![CDATA[library in Javascript]]></category><category><![CDATA[LinkedList]]></category><category><![CDATA[node]]></category><category><![CDATA[nodejs]]></category><category><![CDATA[npm]]></category><category><![CDATA[node module]]></category><dc:creator><![CDATA[Prashant Bansal]]></dc:creator><pubDate>Mon, 25 Jan 2016 06:44:41 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Creating a node.js package and publishing it for other people to use is a pretty simple process. Through this post I shall walk through the process by creating a small library and publishing it.</p>
<p>Before diving into writing the code and creating the package, make sure you have node.js and npm installed. If not, you can follow the process below -</p>
<h2 id="installnodejsandnpm">Install Node.js and npm</h2>
<pre><code class="language-bash"># Install Node and npm on Ubuntu
sudo apt-get install nodejs
sudo apt-get install npm
sudo ln -s /usr/bin/nodejs /usr/bin/node

# Mac users can use Homebrew
brew install node
</code></pre>
<p>Assuming node and npm are installed, let's now get into the core business.</p>
<h2 id="configurenpm">Configure npm</h2>
<p>This configuration step is completely optional, but I prefer to do it because it saves a lot of time. The code below sets your personal details so that every time you create a new package, these details are filled in automatically.</p>
<pre><code class="language-javascript">npm set init.author.name &quot;Prashant Bansal&quot;
npm set init.author.email &quot;prashantban@gmail.com&quot;
npm set init.author.url &quot;http://prashantb.me&quot; 
</code></pre>
<p>Great, so now we have the personal details set.</p>
<h2 id="creatinganodemodule">Creating a node module</h2>
<p>A <mark>Node/npm</mark> module is just an ordinary JavaScript file, with the addition that it must follow the <mark>CommonJS</mark> module spec. Luckily, this is really not as complex as it sounds. Node modules run in their own scope so that they do not conflict with other modules, and Node provides access to some globals to facilitate module interoperability. The two primary items that we are concerned with here are <code>require</code> and <code>exports</code>. You require other modules that you wish to use in your code, and your module exports anything that should be exposed publicly. For example:</p>
<pre><code class="language-javascript">// give the imported module its own name; do not shadow require itself
var someModule = require(&apos;some_module&apos;);
module.exports = function() {
    console.log(someModule.doSomething());
};
</code></pre>
<p>In our demo, we are going to build a LinkedList library. This will be a simple library which can create a linked-list data structure and define the needed functions on it. If you are preparing for interviews with tech companies, you should go through this library and also try to create your own functions for tasks like reversing a linked list, finding an element, or removing one. First, we need to create an empty repository on GitHub.<br>
If you do not want to create one, you can very well clone my repository.</p>
<pre><code>git clone git@github.com:prashantban/ll-js.git
cd ll-js
</code></pre>
<p>Next up, we shall start our node module. It&apos;s easy, just type -</p>
<pre><code>npm init
</code></pre>
<p>This will initialize your node module and ask certain questions; the answers become part of your <mark>package.json</mark> file. You can follow my <mark>package.json</mark> file to see the keys and values required.</p>
<pre><code>{
  &quot;name&quot;: &quot;ll-js&quot;,
  &quot;version&quot;: &quot;0.1.0&quot;,
  &quot;description&quot;: &quot;JS Library for LinkedList&quot;,
  &quot;main&quot;: &quot;index.js&quot;,
  &quot;scripts&quot;: {
    &quot;test&quot;: &quot;./node_modules/.bin/mocha --reporter spec&quot;
  },
  &quot;repository&quot;: {
    &quot;type&quot;: &quot;git&quot;,
    &quot;url&quot;: &quot;https://github.com/prashantban/ll-js.git&quot;
  },
  &quot;keywords&quot;: [
    &quot;js&quot;,
    &quot;linkedlist&quot;
  ],
  &quot;author&quot;: &quot;Prashant Bansal &lt;prashantban@gmail.com&gt; (http://prashantb.me/)&quot;,
  &quot;license&quot;: &quot;MIT&quot;,  
  &quot;bugs&quot;: {
    &quot;url&quot;: &quot;https://github.com/prashantban/ll-js/issues&quot;
  }
}
</code></pre>
<p>More or less, your <mark>package.json</mark> will look similar. Now it is time to move into our main code file - <mark>index.js</mark>. Here is the initial code we need -</p>
<pre><code>module.exports = (function() {

var Node,LinkedList;

Node = function (item) {
    this.item = item;
    this.next = null;
};

LinkedList = function () {
    this.head = new Node(&apos;head&apos;);
    this.size = 0;
};

// find(data) is defined in the full library code linked below;
// it returns the matching node, or null when the data is absent
LinkedList.prototype.insert = function(data) {
   if(this.find(data) === null) {
    	var cur_node = this.head;
        while (cur_node.next !== null) {
            cur_node = cur_node.next;
        }	
        var new_node = new Node(data);
        new_node.next = cur_node.next;
        cur_node.next = new_node;
        this.size += 1;
        return true;
    }
    else {
    	return false;
    }
};

LinkedList.prototype.show = function() {
    var cur_node = this.head;
    if(!cur_node.next) return &apos;&apos;;
    var out = [];
    while (cur_node.next !== null) {
        out.push(JSON.stringify(cur_node.next.item));
        cur_node = cur_node.next;
    }
    var res = out.join(&apos; --&gt; &apos;);
    return res;
};

return {
    LinkedList : LinkedList
};

})();
</code></pre>
<p><a href="https://github.com/prashantban/ll-js/blob/master/index.js?ref=prashantb.me">Full Library Code</a></p>
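<p>To try the module without publishing anything, here is a self-contained version of the core together with its usage. Note that <code>find()</code> is written out here because <code>insert()</code> relies on it; in the snippet above it lives in the full library code -</p>
<pre><code class="language-javascript">// Self-contained version of the library core, plus usage.
function Node(item) {
    this.item = item;
    this.next = null;
}

function LinkedList() {
    this.head = new Node('head');
    this.size = 0;
}

// return the node holding data, or null when it is absent
LinkedList.prototype.find = function(data) {
    var cur = this.head.next;
    while (cur !== null) {
        if (cur.item === data) return cur;
        cur = cur.next;
    }
    return null;
};

// append data at the tail; duplicates are rejected
LinkedList.prototype.insert = function(data) {
    if (this.find(data) !== null) return false;
    var cur = this.head;
    while (cur.next !== null) cur = cur.next;
    cur.next = new Node(data);
    this.size += 1;
    return true;
};

LinkedList.prototype.show = function() {
    var out = [];
    var cur = this.head.next;
    while (cur !== null) {
        out.push(JSON.stringify(cur.item));
        cur = cur.next;
    }
    return out.join(' --> ');
};

var list = new LinkedList();
list.insert(1);
list.insert(2);
list.insert(3);
list.insert(2); // duplicate, rejected
console.log(list.show()); // 1 --> 2 --> 3
console.log(list.size);   // 3
</code></pre>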
<p>Now it is time to write test cases. Node has two wonderful modules named Mocha and Chai. For a beginner, they are probably the easiest tools to write test cases with. Let's first install them in our library -</p>
<pre><code>npm install mocha --save-dev
npm install chai --save-dev
</code></pre>
<p>Similar to <mark>index.js</mark> file, we also need a file where we could write our test cases. I have created a <mark>test</mark> folder and inside it I have put another <mark>index.js</mark> file where we will write all our test cases.</p>
<p>Without explaining the code, since I feel it is self-explanatory, here it is -</p>
<pre><code>var should = require(&apos;chai&apos;).should(),
    ds = require(&apos;../index&apos;),
    LinkedList = ds.LinkedList;

    describe(&apos;LinkedList&apos; , function(){

		describe(&apos;init&apos; , function(){
			it(&apos;should create a Linkedlist Instance&apos; , function(){
				var obj = new LinkedList();
				obj.should.be.an(&apos;object&apos;);
				obj.size.should.equal(0);
				obj.head.item.should.equal(&apos;head&apos;);
				should.not.exist(obj.head.next);
			});
		});

		describe(&apos;show&apos;, function(){
			var emptylist, fullist;
			before(function() {
				emptylist = new LinkedList();
				fullist = new LinkedList();
				fullist.insert(1);
				fullist.insert(2);
				fullist.insert(3);
			});

			it(&apos;should create an emptylist&apos;, function() {
				emptylist.show().should.equal(&apos;&apos;);
			});

			it(&apos;should have only one element&apos;, function() {
				emptylist.insert(1);
				emptylist.show().should.equal(&apos;1&apos;);
			});

			it(&apos;should show 3 elements&apos;, function() {
				fullist.show().should.equal(&apos;1 --&gt; 2 --&gt; 3&apos;);
			});
		});

	});
</code></pre>
<p><a href="https://github.com/prashantban/ll-js/blob/master/test/index.js?ref=prashantb.me">Full Test Code</a></p>
<p>Now, to run the test cases, we only need to let our package.json file know about them. To do so, include the following in your <mark>package.json</mark> file -</p>
<pre><code>&quot;scripts&quot;: {
    &quot;test&quot;: &quot;./node_modules/.bin/mocha --reporter spec&quot;
  },
</code></pre>
<p>and now simply run - <code>npm test</code><br>
The results should be similar to the following -</p>
<pre><code>&gt; ll-js@0.1.2 test /home/bansal/Desktop/Node-Sll/ll-js
&gt; mocha --reporter spec

  LinkedList
    init
      &#x2713; should create a Linkedlist Instance
    show
      &#x2713; should create an emptylist
      &#x2713; should have only one element
      &#x2713; should show 3 elements
    remove
      &#x2713; should remove node with data as 3
      &#x2713; should remove first element
      &#x2713; should remove Last element
    insertAtHead
      &#x2713; should insert this data at head
      &#x2713; should show the data
    insert
      &#x2713; should insert this data at head
      &#x2713; should show the data
      &#x2713; should not allow insertion
    insertAtPosition
      &#x2713; should insert this data at head
      &#x2713; should show the data
      &#x2713; should not allow insertion
    Union
      &#x2713; should union the list and change the size
      &#x2713; should check for null of new list
    reverse
      &#x2713; should reverse the list
      &#x2713; should return false if empty or only 1 element is present


  19 passing (23ms)
</code></pre>
<p>Great, so now all our test cases pass and we can be confident about our module. All that is left is to make sure your git repository is up to date. To do so, run the following -</p>
<pre><code>git tag 0.1.0
git add .
git commit
git push origin master --tags
</code></pre>
<h2 id="publishtonpm">Publish to NPM</h2>
<p>If you are confident that your package has no or very few issues, you can publish your code to npm for other people to use. In order to do so, simply run -</p>
<pre><code>npm publish
</code></pre>
<p>Once your package is up on npm, anyone can install it directly by typing -</p>
<pre><code>npm install ll-js
</code></pre>
<p>Use the version management system of npm to provide updates. Following is the gist of the process -</p>
<ul>
<li>Edit your code.</li>
<li>Update the code in github.</li>
<li>Update the version in <mark>package.json</mark></li>
<li>publish using <code>npm publish</code></li>
</ul>
<p>Lastly, go find your module on the <a href="http://npmjs.org/?ref=prashantb.me">http://npmjs.org</a> website and share it with friends. Here&#x2019;s npm&#x2019;s <a href="https://www.npmjs.com/package/ll-js?ref=prashantb.me">LL-JS</a> page.</p>
<h2 id="relatedlinks">Related Links</h2>
<ul>
<li><a href="http://npmjs.org/?ref=prashantb.me">NPM JS</a></li>
<li><a href="https://github.com/prashantban/ll-js?ref=prashantb.me">LL-JS Github Page</a></li>
<li><a href="https://www.npmjs.com/package/ll-js?ref=prashantb.me">LL-JS NPM Page</a></li>
<li><a href="http://chaijs.com/guide/?ref=prashantb.me">ChaiJS testing Framework</a></li>
</ul>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>