<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Blogs on wh</title><link>https://nrehiew.github.io/blog/</link><description>Recent content in Blogs on wh</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Mon, 20 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://nrehiew.github.io/blog/index.xml" rel="self" type="application/rss+xml"/><item><title>Coding Models Are Doing Too Much</title><link>https://nrehiew.github.io/blog/minimal_editing/</link><pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate><guid>https://nrehiew.github.io/blog/minimal_editing/</guid><description>Code for this post is available here.
AI-assisted coding has become the norm, and with tools like Cursor, GitHub Copilot, Claude Code, and Codex, we are increasingly letting models touch our code. If you have used any of these tools in the past year, you have probably experienced something like this: you ask the model to fix a simple bug (perhaps a single off-by-one error, or maybe a wrong operator). The model fixes the bug, but half the function has been rewritten.</description></item><item><title>Evaluating Long Context (Reasoning) Ability</title><link>https://nrehiew.github.io/blog/long_context/</link><pubDate>Thu, 16 Oct 2025 00:00:00 +0000</pubDate><guid>https://nrehiew.github.io/blog/long_context/</guid><description>Pass@1 scores on the 128k subset of LongCodeEdit.
Reasoning models and long agent trajectories are eating up valuable space in the context window. In response, models are being released with ever-increasing context windows; the latest, Grok 4 Fast, has a 2 million token window.
Unfortunately, as anyone who has worked with these models knows, the number of tokens a model can accept as input is not the same as the number of tokens it can reason over.</description></item><item><title>Flow Matching in 5 Minutes</title><link>https://nrehiew.github.io/blog/flow_matching/</link><pubDate>Thu, 17 Jul 2025 00:00:00 +0000</pubDate><guid>https://nrehiew.github.io/blog/flow_matching/</guid><description>In this post, I will try to build an intuitive understanding of Flow Matching, a framework used to train many state-of-the-art generative image models.
In generative modelling, we start with 2 probability distributions: (1) an easily sampled distribution $p_{\text{source}}$ (e.g. a Gaussian distribution) and (2) a target distribution $p_{\text{target}}$ containing data points (e.g. images). Our goal is to transform a point sampled from $p_{\text{source}}$ into a point that could plausibly have been sampled from $p_{\text{target}}$.</description></item><item><title>The State of Generative Models</title><link>https://nrehiew.github.io/blog/2024/</link><pubDate>Mon, 30 Dec 2024 00:00:00 +0000</pubDate><guid>https://nrehiew.github.io/blog/2024/</guid><description>In the face of disruptive technologies, moats created by closed source are temporary. Even OpenAI’s closed source approach can’t prevent others from catching up. So we anchor our value in our team — our colleagues grow through this process, accumulate know-how, and form an organization and culture capable of innovation. That’s our moat. - Liang Wenfeng, CEO of DeepSeek 2024 has been a great year for AI. In both text and image generation, we have seen tremendous step-function-like improvements in model capabilities across the board.</description></item><item><title>Taking PyTorch for Granted</title><link>https://nrehiew.github.io/blog/pytorch/</link><pubDate>Thu, 04 Jul 2024 09:51:47 +0800</pubDate><guid>https://nrehiew.github.io/blog/pytorch/</guid><description>A while back I challenged myself to implement micrograd in Rust using only the standard library. Along the way, I thought it would be fun to attempt to implement a fully functioning Tensor library on top of micrograd. I thought that my familiarity with PyTorch would make this easier, but having to do so without the higher-level abstractions of Python turned out to be much harder than expected.
In this post, I hope to share some of what I learned throughout this process, which forced me to think deeply about how PyTorch actually works under the hood.</description></item></channel></rss>