<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>blog</title>
    <link>/tags/blog/</link>
    <description>Recent content in blog</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en-us</language>
    <lastBuildDate>Mon, 27 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="/tags/blog/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>BioXComputing: How biology and nature can serve as inspiration for next-generation computing platforms</title>
      <link>/posts/bioxcomputing_interview_early_stage_founder/</link>
      <pubDate>Mon, 27 Apr 2026 00:00:00 +0000</pubDate>
      
      <guid>/posts/bioxcomputing_interview_early_stage_founder/</guid>
      <description>Deep-tech discussions with an early-stage founder in the BioXComputing domain. [Work In Progress]</description>
    </item>
    
    <item>
      <title>How do we even get there? From Hello World in C to bare-bones binaries running on an OS</title>
      <link>/posts/hello_world_to_bare_bones_binary/</link>
      <pubDate>Fri, 17 Apr 2026 00:00:00 +0000</pubDate>
      
      <guid>/posts/hello_world_to_bare_bones_binary/</guid>
      <description>STAGE 1: Preprocessing Command: gcc -E hello.c -o hello.i
What happens:
#include &amp;lt;stdio.h&amp;gt; gets replaced by the entire contents of stdio.h: thousands of lines of function declarations, type definitions, and macros. printf is not defined yet, just declared; the preprocessor doesn&amp;rsquo;t know what printf does, only that it exists.
Output: hello.i, pure C code, no # directives, just expanded text. Roughly 800 lines for this tiny program.
// ... thousands of lines from stdio.</description>
    </item>
    
    <item>
      <title>How do syscalls know what they need to execute, and when, and how? [INTERNALS]</title>
      <link>/posts/syscall_internals/</link>
      <pubDate>Fri, 10 Apr 2026 00:00:00 +0000</pubDate>
      
      <guid>/posts/syscall_internals/</guid>
      <description>There&amp;rsquo;s an interesting history behind this post: I interviewed at IBM for a compiler engineering position. The discussion was great fun and revolved around everything from C to low-level compiler and OS topics, including computer-architecture-level intricacies. We delved deep into privilege escalation (user to kernel mode), the runtime stack, and eventually syscalls and interrupts/TRAPs. The basic question was the difference between an interrupt and a syscall, which anyone can answer, but it gets interesting when we start looking at the syscall as the main character.</description>
    </item>
    
    <item>
      <title>Understanding process-to-process communication in depth at the OS level</title>
      <link>/posts/multiple_process_communication_in_os/</link>
      <pubDate>Fri, 03 Apr 2026 00:00:00 +0000</pubDate>
      
      <guid>/posts/multiple_process_communication_in_os/</guid>
      <description>Understanding process-to-process communication in depth at the OS level. This again comes from one of the interview discussions at IBM for a compiler role.</description>
    </item>
    
    <item>
      <title>Finding myself in C lang intricacies</title>
      <link>/posts/c_lang_intricacies/</link>
      <pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate>
      
      <guid>/posts/c_lang_intricacies/</guid>
      <description>Over the past few years, through my work with Unikraft on unikernels, a compiler research internship at CERN/Berkeley Lab, hardware–software co-design at Vicharak Computers, and systems research for neuromorphic chips at IISc, I’ve been trying to find my place in the low-level world.
But I kept running into the same wall: a lack of deep understanding of C’s intricacies, the kind that every serious low-level engineer or hacker is expected to master.</description>
    </item>
    
    <item>
      <title>Reverse Engineering 101</title>
      <link>/posts/reverse_engineering_101/</link>
      <pubDate>Sat, 21 Mar 2026 00:00:00 +0000</pubDate>
      
      <guid>/posts/reverse_engineering_101/</guid>
      <description>When nothing goes right, you open your laptop and end up doing the things you would do anyway, whatever the fk is going on in the world; for me, that has been the love of understanding my computer. Reverse engineering grounds me and improves my focus, and it wouldn&amp;rsquo;t be wrong to call it a &amp;ldquo;mindset&amp;rdquo; rather than a &amp;ldquo;discipline&amp;rdquo;.</description>
    </item>
    
    <item>
      <title>XPUs it is! Not GPUs: Architectures &amp;&amp; an Energy-Efficient Future</title>
      <link>/posts/xpus_not_gpus_energy_efficient_future/</link>
      <pubDate>Fri, 13 Feb 2026 00:00:00 +0000</pubDate>
      
      <guid>/posts/xpus_not_gpus_energy_efficient_future/</guid>
      <description>My remote interview experience with the co-founder/CEO of Zettascale Computing Corp (a Y Combinator-funded deep-tech startup); this time it was XPUs, not GPUs or TPUs.
Technical discussions, architecture-level questions: → Why are you even doing this? Impact? Real impact, not how the next ChatGPT will be much faster. → Creative models like BNNs or LGNs don&amp;rsquo;t work well on GPUs. → They quickly get disregarded. What about training BNNs on XPUs? Will we be able to train them on XPUs, given that on GPUs they are extremely slow?</description>
    </item>
    
    <item>
      <title>Figuring out the low-level stack for novel architectures [PATH]</title>
      <link>/posts/figuring-out-low-level-stack-for-novel-architectures/</link>
      <pubDate>Sun, 04 Jan 2026 00:00:00 +0000</pubDate>
      
      <guid>/posts/figuring-out-low-level-stack-for-novel-architectures/</guid>
      <description>This isn&amp;rsquo;t just another roadmap but my personal low-level learning path (which will never end, I know) that I have been following while figuring out the complete end-to-end low-level stack for a novel computing architecture: a memristor-based, in-memory, brain-inspired chip.
Questions to answer: “How does userspace talk to hardware?” “Where do syscalls actually live?” “How would my neuromorphic accelerator appear to Linux?” “How does the kernel map physical devices to memory?” “How would I add a new instruction or device?</description>
    </item>
    
    <item>
      <title>A small primer on Logic Gate Networks [RESEARCH PROJECT]</title>
      <link>/projects/short_primer_on_logic_gate_networks/</link>
      <pubDate>Fri, 21 Mar 2025 00:00:00 +0000</pubDate>
      
      <guid>/projects/short_primer_on_logic_gate_networks/</guid>
      <description>A logic gate network is a neural network whose neurons are logic gates. A traditional neural network consists of the following components:
neurons, connections (weights). Every neuron has properties like bias and weights. During NN training, these weights are learned and updated at every layer during backpropagation.
In a logic gate network, instead of 32-bit floating-point weights, we have a network which is weightless.</description>
    </item>
    
    <item>
      <title>Hacking LLVM IR [Internals]</title>
      <link>/posts/hacking_llvm_ir/</link>
      <pubDate>Mon, 18 Nov 2024 00:00:00 +0000</pubDate>
      
      <guid>/posts/hacking_llvm_ir/</guid>
      <description>Another interview-motivated post; this time it was an NVIDIA LLVM IR engineering intern position. They wanted me to have contributed to a real open-source codebase, specifically LLVM/LLVM IR. So here I will try to explain some interesting hacks around LLVM IR, the experiments I did, and what I learnt. I have learned the very hard way that it doesn&amp;rsquo;t matter if you know hundreds of topics or concepts or have read dozens of technical blogs.</description>
    </item>
    
    <item>
      <title>Breaking down Unikernels and hosting a personal page on it</title>
      <link>/posts/breaking_down_unikernels/</link>
      <pubDate>Tue, 01 Aug 2023 00:00:00 +0000</pubDate>
      
      <guid>/posts/breaking_down_unikernels/</guid>
      <description></description>
    </item>
    
  </channel>
</rss>
