Story
NTU was the initial codename for the project (ImMobile).
Welcome to the story behind NTU (NT Universal), my attempt to create a true universal environment that works even on the earliest Windows builds and provides a desktop-like experience on limited, often abandoned hardware.
This project is powered by C++ and ImGui, a combination chosen for raw performance and flexibility. I want to take you through the journey: the problems I faced, the technical trenches I fell into, and the solutions I came up with, often through sheer persistence rather than perfection.
This post is compiled from my conversations during the very early stages, when most parts of NTU were still only thoughts and ideas.

UWP has been incredibly limiting, especially on older Windows devices that were forced to use it exclusively.
Microsoft's main direction has been pushing .NET and XAML, shifting app dependencies from OS-level libraries to frameworks. As a result, we ended up with apps that were completely incompatible with older Windows builds (14393 and below). I constantly ran into situations where even small XAML components would break entire apps, and there were no known fixes for the compatibility issues.
At the C++/DirectX level, most apps designed for UWP perform generic DPI calculations, which results in oversized frame rendering on weak SoCs. Porting them to a XAML-based UI made things worse. Older Windows builds had memory leaks in some components, making an app progressively heavier the longer it ran.
To make matters worse, newer XAML libraries demanded significantly more system resources.
Even more complicated was trying to port apps like WUT (Windows Universal Tool) to Windows 8.1 using C#/XAML; it was nearly impossible.
Some essential .NET libraries were simply unavailable. I couldn't even get a basic HTTP client working. Whether that was my fault or not, I didn't face such issues with my ImMobile project, which used C++/DirectX/ImGui.
I spent years fixing and completing UWP apps for ARM32 that were abandoned by other developers.
Some of those efforts felt like a waste of time, especially with GPL-licensed projects where some owners were skeptical, thinking I was doing it for profit.
Others, like PPSSPP and RetroArch, had friendly owners, which made the ARM32 ports worth the effort. Still, you can spend months on this work and users may never know your name; that's one of the reasons I dropped many plans to push legacy support into the official repositories.
Some project owners didn't seem to grasp the reality of investing precious time into legacy support. It's a huge risk: your ideas and your work are out in public, and there is zero protection for even a single line of it.
That's when I started to appreciate the MIT license. It promotes transparency and goodwill, but also encourages responsibility.
So, what now? The only real solution is to build your own thing, something based solely on what's necessary, leaving the outdated environment behind.
Originally, I never imagined it would grow this large. My initial idea was just to create a simple grid with a click-and-show window.
But it turned out that building an environment, especially using my own complex calculation methods, was far more challenging.
Since I taught myself C/C++, I knew that any project I create should be:
- C++-friendly and easy to use
- Able to let others focus on logic, while NTU handles the heavy lifting: UI, system access, and touch input
That was, and still is, the vision.
This was the first debug-test screenshot of NTU (ImMobile):

At that moment, with the dialog and touch-pad etc. working, I thought everything would be easy and quick. Lol.
I had two projects in development at the same time: the main one for Windows 10 (WinRT), and NTU for Windows 8.1 / Mobile (C++/CX).
Showcase video: devenv_Z2b8EYjFfN.mp4
Generally, after laying the foundations during early development, I had to put the 8.1 version on hold to push the main one forward; maintaining both was hard and very time-consuming. Later, porting the whole thing to 8.1 took 30+ days (around two months), with even more challenging issues.
Let me take you through the biggest pain points I encountered, the decisions I made, and the stuff that just wouldn't work until it finally did.
Each section here is a chapter of what was happening behind the scenes.
The first major hurdle I hit? UWP file access, especially on Windows builds older than 14393.
Microsoft only allowed retrieving a native HANDLE from a StorageItem starting with build 14393. Anything older than that? You're pretty much on your own.
So I had to implement two separate code paths:
- One for builds below 14393
- Another for builds 14393 and above
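Picking between the two paths at runtime comes down to knowing the build number. On UWP it can be derived from `AnalyticsInfo::VersionInfo().DeviceFamilyVersion()`, which returns the packed 64-bit version as a decimal string. Below is a self-contained sketch of that parsing; the gate function name is hypothetical, not ImMobile's actual code:

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Extract the Windows build number from the packed value that
// Windows.System.Profile.AnalyticsInfo.VersionInfo.DeviceFamilyVersion
// returns as a decimal string (major.minor.build.revision, 16 bits each).
static uint32_t BuildFromDeviceFamilyVersion(const std::string& versionString) {
    uint64_t v = std::stoull(versionString);
    return static_cast<uint32_t>((v >> 16) & 0xFFFF);  // third 16-bit field
}

// Hypothetical gate for choosing the storage code path at runtime.
static bool HasStorageItemHandleAccess(uint32_t build) {
    return build >= 14393;  // HANDLE access from StorageItem exists from 14393 on
}
```

The string "2814750710366208" below is 10.0.14393.0 packed into 64 bits, used here purely as test input.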
And the further back you go, the worse it gets. I had to come up with some really strange workarounds. Thankfully, I had prior experience from an earlier UWP storage project, which helped a lot.
Still, of course... it never goes smoothly.
In this kind of situation, you end up doing exactly what you were trying to avoid.
What does that mean?
It means you have to fall back to the basics: the old-school, legacy way of handling files, using things like fopen, fread, fwrite, and other file-stream functions.
Most C/C++ codebases use those anyway. Some fall back to HANDLE (like libzip), but a 90% working solution is always better than something completely broken.
So I built a full emulation layer for fopen, fread, fwrite, and more, based on a fake FILE* wrapper that behaves just like the standard C API.

It was anything but easy, especially for the functions that involve formatted output. I ran into weird issues that forced me to manually parse format strings.
This part of the project was long, difficult, and time-consuming. I had to dive deep and understand how each stream function really works: exactly the kind of low-level detail I had always avoided whenever I saw those functions in a library.
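A heavily simplified sketch of such a wrapper follows. All names (`NtuFile`, `ntu_fopen`, ...) are hypothetical; the real layer sits on top of WinRT storage streams, while here the backing store is an in-memory map so the sketch stays self-contained, and formatted output is delegated to `vsnprintf` rather than the manual format-string parsing described above:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdarg>
#include <cstdio>
#include <cstring>
#include <map>
#include <string>
#include <vector>

// Hypothetical stand-in for the fake FILE* wrapper.
struct NtuFile {
    std::string path;
    std::vector<char> data;
    size_t pos = 0;
};

static std::map<std::string, std::vector<char>> g_fakeFs;  // path -> contents

static NtuFile* ntu_fopen(const char* path, const char* mode) {
    NtuFile* f = new NtuFile();
    f->path = path;
    if (std::strchr(mode, 'r')) f->data = g_fakeFs[path];  // load existing contents
    return f;
}

static size_t ntu_fwrite(const void* src, size_t size, size_t count, NtuFile* f) {
    const char* p = static_cast<const char*>(src);
    f->data.insert(f->data.begin() + f->pos, p, p + size * count);
    f->pos += size * count;
    return count;
}

static size_t ntu_fread(void* dst, size_t size, size_t count, NtuFile* f) {
    size_t bytes = std::min(size * count, f->data.size() - f->pos);
    std::memcpy(dst, f->data.data() + f->pos, bytes);
    f->pos += bytes;
    return bytes / size;
}

static int ntu_fprintf(NtuFile* f, const char* fmt, ...) {
    // Format into a buffer, then reuse the plain write path.
    char buf[512];
    va_list args;
    va_start(args, fmt);
    int n = std::vsnprintf(buf, sizeof(buf), fmt, args);
    va_end(args);
    if (n > 0) ntu_fwrite(buf, 1, static_cast<size_t>(n), f);
    return n;
}

static int ntu_fclose(NtuFile* f) {
    g_fakeFs[f->path] = f->data;  // flush back to the store
    delete f;
    return 0;
}
```

The point of the shape above is that existing C/C++ code only needs its `fopen`/`fread`/`fwrite` calls redirected to the wrapper, not rewritten.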
UWP apps have a Pause/OnSuspending behavior, meaning the app (and its threads and render loop) gets paused whenever something else steals focus. This includes file pickers, which suspend the app until the user selects a file (on desktop this may differ).
As you'd expect, this introduces bugs. For example, the file picker might close unexpectedly during file selection, and if the app doesn't handle it properly, it may crash. Good luck debugging that.
So instead of relying on external pickers, why not do it the better way: pick files directly in place, and manage them too.
I started with ImFileDialog, but had to modify it to make it usable in a mobile UWP context:
- On vertical mobile screens, the layout was nearly unusable.
- The on-screen keyboard would pop up and cover half the interface.
- Async file loading was necessary to prevent UI freezes.
Here's what I added:
- File previews
- Asynchronous loading
- Basic file operations (copy, delete, rename)
There were many other fixes and tweaks for the environment that I cannot remember.
One particularly tricky issue was navigation. On touchscreens, I don't like tapping to open files directly; the user may need to scroll or select before opening.
Double-tap navigation was implemented and works, but due to varying frame rates and tap speeds, it wasn't always reliable.
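One way to make double-tap detection frame-rate independent is to compare wall-clock timestamps and finger positions instead of counting frames, so a slow frame on a weak SoC doesn't stretch the tap window. This is an illustrative sketch, not ImMobile's actual detector; the thresholds are made up:

```cpp
#include <cassert>

// Track the previous tap so a second tap close enough in time and
// space can be recognized as a double-tap.
struct TapState {
    double lastTapTime = -1.0;  // seconds; -1 means "no pending tap"
    float lastX = 0, lastY = 0;
};

static bool IsDoubleTap(TapState& s, double now, float x, float y) {
    const double kMaxInterval = 0.35;   // max seconds between the two taps
    const float  kMaxDistance = 24.0f;  // max pixels of finger drift
    float dx = x - s.lastX, dy = y - s.lastY;
    bool doubleTap = s.lastTapTime >= 0.0 &&
                     (now - s.lastTapTime) <= kMaxInterval &&
                     (dx * dx + dy * dy) <= kMaxDistance * kMaxDistance;
    s.lastTapTime = doubleTap ? -1.0 : now;  // consume the pair on success
    s.lastX = x; s.lastY = y;
    return doubleTap;
}
```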
The solution? I made the left-side icon act as a button that opens a context menu, which is much more stable and touch-friendly. In a tight area with a lot of files there is no empty space for the user to open the root context menu, so the root folder has a submenu button at the top that exposes a few common tasks.
Showcase (Click Here)

I used to think touch input would be simple. One tap = one event, right?
Nope. It turned out to be one of the hardest parts.
Handling touch properly meant analyzing input across multiple frames. For example:
- One tap? Might be a scroll.
- Tap and hold? Maybe it's a context menu.
- Two fingers? Possibly a zoom gesture.
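The ambiguity above can be sketched as a tiny classifier over the signals the text mentions: contact count, hold duration, and how far the finger moved. All names and thresholds here are illustrative assumptions, not ImMobile's real touch engine:

```cpp
#include <cassert>

enum class Gesture { Tap, Hold, Scroll, Zoom };

// Classify a contact once enough frames have been observed to know
// how many fingers were down, how long, and how far they travelled.
static Gesture ClassifyGesture(int fingers, double heldSeconds, float movedPixels) {
    if (fingers >= 2) return Gesture::Zoom;          // two contacts: pinch/zoom
    if (movedPixels > 8.0f) return Gesture::Scroll;  // finger travelled: scroll
    if (heldSeconds > 0.5) return Gesture::Hold;     // long press: context menu
    return Gesture::Tap;                             // short, stationary contact
}
```

The real difficulty, as described above, is that these signals only become unambiguous several frames after the touch starts, so the engine has to defer its decision.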
I had to account for delays, offsets, visibility, and even the angle of your finger, because it can block what you're interacting with.
Touch behavior became an entire engine of its own.
It's still not perfect, but after a lot of pain, I'd say it works around 70% correctly.
I've always believed there's no magic or "universal" solution to this stuff. At some point, you have to simulate the behavior manually and handle it case by case.
In my WUT app, I had to write deep conditional logic just to handle navigation between pages. It may sound like a bad idea, but from a user-experience perspective it's far better than relying on standard page navigation and caching systems.
The same applies here. I'm sure platforms like Android and iOS have deeply researched, highly tuned engines to interpret touch behavior. It's not just "tap = event"; there's a lot going on under the hood.
Another tricky example: the touch keyboard (input pane).
In theory, when you tap an input field, it should open. And when you tap outside, it should close.
But in practice? Sometimes I don't want it to open, like when I was scrolling and my finger just happened to lift off over an input field. Same with the start menu: I wasn't trying to open it; I was just scrolling horizontally and landed on it by accident.
These things happen quickly, but they trigger unwanted actions. Some parts of the UI have NoInput flags to prevent them from stealing focus and to allow better touch control; of course, that introduced extra problems that needed to be handled.
And then there's the opposite case: in a terminal, I want to copy text but also keep the keyboard visible because I'm still typing. So I had to write separate behavior per scenario.
Even menus and submenus caused trouble. On vertical screens, a submenu might appear directly under your finger, which just touched the parent item. Because of release-delay handling, you might accidentally trigger the submenu as well.
Honestly, I could go on forever about touch-related headaches, but these are the kinds of problems I actually enjoy solving. After a lot of work, I finally reached a touch experience that feels... well, pretty fine.
Since this environment is becoming a standalone platform, I wanted it to feel more like a desktop experience: multiple windows, a taskbar, minimize/maximize behavior, and all the management that comes with it.
That's when you really start to appreciate how complex operating systems are when it comes to window and focus management.
Here were some key challenges I had to solve:
- Handling focus across multiple windows
- Maintaining layout integrity on mobile rotation
- Preventing minimized windows from wasting CPU by continuing to render
Each window includes a default menu system with basic controls: options like fullscreen, fit to top/bottom, and a File menu containing actions like Close (and potentially more). These controls are essential, and I decided to make them part of every window by default.
I had to manually check when a user intends to close a window, such as when they hit the back button. Only the window currently in focus should respond and close.
By default, windows should not render if they're not active (with some exceptions), to avoid wasting resources. So every window constantly checks whether it's allowed to render before doing so.
But here's a tricky part: when a popup dialog appears, it steals focus. That would incorrectly mark the underlying window as inactive, even though it's technically still the parent. To solve this, I track whether a popup belongs to the current window. If it does, the window stays active and continues rendering.
Another issue I ran into was with certain windows (like configuration screens). When the window was reactivated from an inactive state, the scroll position would reset to the top. To fix this, each window maintains its own state object, storing details like:
- Whether it's minimized
- Current scroll position
- Whether it's rotated
- Custom flags and rendering behavior
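The per-window state object and the render-gating rules described above can be sketched together. Field and method names are illustrative, not ImMobile's real ones; `ShouldRender()` encodes the rules from the text (minimized or unfocused windows skip rendering, unless forced to render or they own the popup that currently holds focus):

```cpp
#include <cassert>

struct WindowState {
    bool minimized    = false;
    bool focused      = false;
    bool ownsPopup    = false;  // a popup spawned by this window holds focus
    bool renderAlways = false;  // e.g. the RenderAlways flag
    float scrollY     = 0.0f;   // restored on reactivation instead of resetting
    bool rotated      = false;

    bool ShouldRender() const {
        if (minimized) return false;  // never waste CPU on minimized windows
        return focused || ownsPopup || renderAlways;
    }
};
```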
One more subtle bug was related to tab interactions on touch devices. I had a workaround to enable easier tab switching via touch. But it introduced a weird edge case: if the current window was touched, and another window behind it had a tab located at the exact same screen position, the tab in the inactive window could get triggered.
I fixed this by adding stricter validation in the touch resolver, ensuring that only the currently focused window can respond to input.
Implementing minimize and maximize wasn't simple either. There were a lot of edge cases that needed special handling: window positioning, input management, and focus recalculation.
To better control performance and behavior, I also added custom ImGui flags:
- RenderAlways: forces a window to render even when inactive.
- Render30FPS: useful for heavy-content windows that don't need 60 FPS updates.
- TouchPad: allows the touchpad to render over a window that has this flag.
There is also the annoying touch keyboard part.
When an input field is focused, the touch keyboard should pop up. That works perfectly, as long as the input is near the top of the screen.
But if the input is located below the center, it gets completely blocked, especially in vertical orientation (don't even ask about horizontal mode... it's worse).
To fix this, the input field needs to be scrolled into the visible area above the keyboard. But that's not as straightforward as it sounds.
First, you need to resize the window layout to fit within the available space above the keyboard. Then, you have to scroll just the right amount to make the selected input fully visible, ideally centered within the new viewable area.
After many calculations, I got this workaround working with about 90% reliability. But it was a challenge, because any interaction, even a small one, could dismiss the keyboard unexpectedly, which meant everything had to revert gracefully.
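The core of that calculation can be sketched in a few lines: shrink the usable area to what remains above the input pane, then scroll so the field lands roughly centered in it. Values are in pixels and the function name is hypothetical:

```cpp
#include <algorithm>
#include <cassert>

// Return the new scroll offset that centers a focused field in the
// area left above the on-screen keyboard.
static float ScrollToRevealField(float screenH, float keyboardH,
                                 float fieldY, float fieldH, float currentScroll) {
    float visibleH = screenH - keyboardH;         // space left above the keyboard
    float targetY  = (visibleH - fieldH) * 0.5f;  // center the field in that space
    float delta    = fieldY - targetY;            // how far to scroll down
    return std::max(0.0f, currentScroll + delta); // never scroll past the top
}
```

A field at y=600 on an 800px screen with a 400px keyboard needs a large scroll; a field already near the target position needs none.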
Showcase (Click Here)

This is one of the reasons you'll see the file browser's fields at the top.
On Android, this problem is often avoided by presenting the input in a dedicated fullscreen layout, showing only the keyboard and the input field. This keeps the rest of the UI untouched and predictable.
I followed a similar approach in the hex editor: when editing an input, a separate dialog pops up specifically for that field. It's much cleaner for complex input cases, and I'm considering making it the default behavior, where a floating input dialog appears when you tap to edit text, ensuring a smoother and more consistent experience.
Showcase (Click Here)

Overall, it's been a deep dive into UI architecture. There's no one-size-fits-all solution; everything had to be carefully calculated and emulated to achieve the desktop-style behavior I wanted, especially on mobile and touch-based platforms.
This was the first component I started working on after resolving the main app's rendering, rotation, and resizing behavior.
In my earlier WUT app, managing settings was frustrating. Even though I had a (partial) settings engine, all settings were manually defined. Every time I needed to add a new setting, I had to duplicate logic across both XAML and C#.
Looking back, it's clear how much easier this could have been, especially now, using ImGui for UI rendering. It allows for more streamlined and dynamic settings panels.
However, building a universal configuration engine isn't easy. It needs to handle a wide variety of data types and support persistent storage.
Initially, I used the app's built-in preferences storage, since at the time I didn't know how to properly save configs to an INI file; I was still learning the depths of C++.
The basic features like saving and retrieving int, bool, and similar types were working well.
The goal of the configs engine was to allow building a settings page with minimal code. I also added helper utilities to make the process even smoother.

It was a time-consuming task, and my first serious attempt at developing a full C++ application. I learned a lot, especially about pointer-related issues, as the engine relied heavily on pointers for real-time synchronization of values.
Later, I added the SimpleIni library, which made INI save/restore possible.
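The pointer-based idea can be sketched as a registry where each entry points at the live variable, so the settings UI and the INI writer both touch the same storage. Everything below is an illustrative assumption (a toy serializer stands in for SimpleIni, and all names are mine):

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// One registered setting: a key plus a pointer to the live variable.
struct ConfigEntry {
    std::string key;
    enum Type { Int, Bool } type;
    void* ptr;
};

struct ConfigEngine {
    std::vector<ConfigEntry> entries;

    void BindInt(const std::string& key, int* v) {
        entries.push_back({key, ConfigEntry::Int, v});
    }
    void BindBool(const std::string& key, bool* v) {
        entries.push_back({key, ConfigEntry::Bool, v});
    }

    // Toy INI emitter; the real project delegates this to SimpleIni.
    std::string SaveIni() const {
        std::ostringstream out;
        for (const auto& e : entries) {
            out << e.key << "=";
            if (e.type == ConfigEntry::Int) out << *static_cast<int*>(e.ptr);
            else out << (*static_cast<bool*>(e.ptr) ? 1 : 0);
            out << "\n";
        }
        return out.str();
    }
};
```

Because the registry holds pointers, a settings page can be drawn by iterating `entries` with one ImGui widget per type, which is what keeps the per-setting code minimal.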
Below is an example of settings pages being built and drawn dynamically:
Showcase (Click Here)

One of the first questions I asked myself back when I used Windows Phone was: "Why is there no built-in text editor or viewer?"
Because of that, I decided to include a default text viewer in my environment, so no one else would have to ask the same question.
When I started developing apps (around late 2019), I noticed how poorly some early versions of XAML/WinUI handled text.
Specifically, large text files loaded into text-input components could cause serious memory leaks and performance issues, to the point where memory usage spiked and everything slowed down dramatically.
Now, don't get me wrong: C#/XAML has its strengths and works great in many scenarios. This isn't a criticism of the platform itself. But coming from a C#/XAML background, it felt nearly impossible to read and interact with large text files efficiently.
That's when I came across ImGuiColorTextEdit. I had already been planning a more advanced text utility (as part of the NTU idea), and this seemed like a potential solution. Still, I had zero expectations for how fast C++/DirectX could be in practice.
But once I got it running, I was shocked: the performance was beyond anything I anticipated.
A smooth, colored, long-text editor running at 60 FPS on an ARM SoC felt like magic.
It completely changed how I viewed programming languages and any future UI framework I may use.
Of course, major credit goes to BalazsJako, the developer of ImGuiColorTextEdit. If the implementation hadn't been done so well, none of this would've been possible; the environment doesn't make things "magically fast" on its own.
Here is an example of memory usage, with the same file opened in ImMobile (C++) and WUT (.NET); ImMobile even had an extra file open during the test.
The XAML elements (specifically text input) demand so much RAM that the app lags badly, not to mention the extra leftover RAM that depends on the GC to clean up later. This is very serious when dealing with a device that has limited hardware.
After integrating ImGuiColorTextEdit, though, I encountered a new set of challenges, especially around mobile use cases:
- Touch input caused odd selection behavior.
- The on-screen keyboard (input pane) would cover the editor.
- Cursor visibility often required manual scrolling.
You can't always predict how users will interact with a text editor β whether for viewing or editing.
For instance, if the user was just scrolling through a file and the input pane suddenly appeared, they'd be forced to tap outside of the text box, even though the text box itself might take up 90% of the screen.
These edge cases needed careful handling to avoid frustrating user experiences.
Another challenge was screen scaling. On devices with varying DPI settings, text size wasn't always optimal. So I had to implement:
- Zoom in/out controls in the menus,
- Additional syntax highlighting themes,
- File save/open operations (UWP-compatible),
- A quick Markdown viewer with live updates,
- Bug fixes related to ARM memory heap behavior.
Despite all of this, it's one of the components I rarely had to revisit; it just worked. Huge thanks to BalazsJako: this widget was a real game-changer.
When I began integrating a few samples early in development, one of the first issues I encountered was that certain ImGui elements didn't respond to tap or touch input.
After a bit of research, I found a reference from Omar (ImGui's author) explaining that tab interactions typically require a few frames of hover before a click is registered.
However, in a touch interface, there's no concept of hover. So in this context, a tab that's hovered via touch should effectively be treated as clicked.
To address this, I implemented a solution that has gradually evolved over time. It manually switches tabs as soon as they're hovered via touch.
It was a tricky workaround, especially since in many environments the native ImGui input flow isn't used directly; instead, input is processed and pointer actions are simulated. This made it necessary to carefully manage state to ensure consistent behavior.
Handling DPI scaling was one of the most important things I tackled early on, and I'm very glad I did.
Without it, things would have quickly become chaotic across different devices, each potentially using its own DPI configuration.
It became even more complex once I added the ability for users to manually adjust the DPI scale, as well as to customize font scaling. These introduced additional layers of variability.
Fortunately, because the DPI and scaling logic were addressed early, it laid a strong foundation for creating a consistent and unified UI across all device types later in development.
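The layers of variability described above compose multiplicatively, which is what makes tackling them early pay off. A minimal sketch of that composition, assuming the Windows baseline of 96 DPI (the function names are mine, not ImMobile's):

```cpp
#include <cassert>

// Combine device DPI with the user's manual UI scale override.
static float EffectiveUiScale(float deviceDpi, float userScale) {
    return (deviceDpi / 96.0f) * userScale;  // 96 DPI = 1.0x on Windows
}

// Font size adds a third, independent layer on top of the UI scale.
static float EffectiveFontSize(float baseFontPx, float deviceDpi,
                               float userScale, float fontScale) {
    return baseFontPx * EffectiveUiScale(deviceDpi, userScale) * fontScale;
}
```

Routing every size in the UI through one function like this is what keeps the rendering consistent when any single layer changes.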
ImGui doesn't officially support keeping certain windows always on top, which makes sense given the framework's design philosophy.
However, in our case it was critical that certain elements, like toolbars or input overlays, remain above others. For example, when a user taps into a text box and needs access to copy/paste tools, those should always appear above all other windows.
Thankfully, ImGui provides an internal function to bring a window to the front without affecting input focus. But initially I observed a flicker: when tapping, the window would briefly be pushed back, then brought forward again. This behavior was once described in an ImGui issue as windows "fighting for z-index," and it's difficult to resolve cleanly.
Eventually, I implemented a custom solution (involving some modifications to ImGui itself) to force specific windows to stay strictly on top. It now works as intended, with zero flicker.

To avoid blocking the UI, I designed an asynchronous queue system that processes tasks in the background.
Initially, I assumed that three queues would be sufficient; however, real-world usage quickly showed that at least five queues were required to properly handle actual workloads.
Even with five queues, there are still some edge cases that would benefit from a more dynamic solution. That said, the current approach is stable and effective for now.
The reason NTU requires multiple queues is that many tasks are directly tied to normal user behavior. For example, a user may browse files while assets are downloading, or the application may need to generate image or thumbnail previews on demand. Each of these small, micro-level processes needs to be dispatched to a different queue to prevent blocking or contention.
I intentionally avoided designing a system that allows unlimited background tasks, and I do not plan to do so. Allowing too many concurrent background processes would force users to wait longer, and on low-end ARM SoC hardware in particular, excessive async workloads can cause significant throttling. For instance, triggering multiple downloads must be strictly routed to the appropriate queue, and the same rule applies to file-related tasks.
One important gap early in development was my lack of awareness of atomic types (for bool, int, and similar primitives). Using atomics could have prevented many unsafe threading issues that caused ImMobile to crash when users rapidly performed complex operations.
I have since addressed many of these cases using atomic, but another refinement pass is still required in upcoming updates.
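The kind of fix involved is small: flags and counters shared between queues become `std::atomic`, so concurrent updates can't tear or get lost. A self-contained illustration (not ImMobile code):

```cpp
#include <atomic>
#include <thread>
#include <vector>

// A counter bumped from several worker threads. With a plain int this
// would lose increments; std::atomic makes every increment take effect.
static int CountWithAtomics(int threads, int incrementsPerThread) {
    std::atomic<int> counter{0};
    std::vector<std::thread> pool;
    for (int t = 0; t < threads; ++t)
        pool.emplace_back([&] {
            for (int i = 0; i < incrementsPerThread; ++i)
                counter.fetch_add(1, std::memory_order_relaxed);  // safe increment
        });
    for (auto& th : pool) th.join();
    return counter.load();
}
```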
Overall, this async task engine has significantly improved the ability to process tasks dynamically using regular Win32 code, without requiring a transition to UWP. The API also gives developers control over task classification, allowing them to specify the task type (Download, Background, Instant, Files, and so on), which ensures predictable scheduling and better performance.
The top and bottom areas of a mobile screen are semi-dead zones (used for notifications and gestures), and touch often fails there.
I decided not to place anything critical in those spots, and there are custom calculations that increase the pointer delay when a touch lands in those areas.
Mixed fonts can be a serious RAM killer. Initially, I assumed that adding FontAwesome would be a simple task.
In early testing, I loaded the icon font with the full glyph set. While running in debug mode, I first assumed the slow startup time was caused by debug symbols or related tooling. However, after nearly a week of ignoring this issue, I noticed that the application was consuming around 160 MB of RAM.
At that point, I still wasnβt certain of the root cause, as fonts were not yet on my radar. I then switched to detailed, line-by-line debugging to identify what was driving the excessive memory usage, and it turned out to be the font itself.
To resolve this, I changed the approach to load only the specific glyphs required for icons instead of the full FontAwesome set, which reduced memory usage to a very low level.
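The reason this works is that every glyph in the requested ranges gets baked into the font atlas. ImGui expresses ranges as arrays of {first, last} codepoint pairs terminated by 0; the sketch below counts glyphs for such ranges, using the Private Use Area block (where FontAwesome lives) as an illustrative "load everything" range. The specific icon codepoints are examples, not ImMobile's actual curated list:

```cpp
#include <cstddef>

using ImWchar = unsigned short;  // ImGui's default codepoint type

// Count how many glyphs a {first, last, ..., 0} range array requests.
static size_t GlyphsInRanges(const ImWchar* ranges) {
    size_t count = 0;
    for (; ranges[0]; ranges += 2)
        count += static_cast<size_t>(ranges[1] - ranges[0]) + 1;
    return count;
}

// Whole Private Use Area block: thousands of glyphs baked for nothing.
static const ImWchar kFullIconRange[] = { 0xE000, 0xF8FF, 0 };

// Only the icons actually used (example codepoints: folder, folder-open, save).
static const ImWchar kCuratedIcons[] = { 0xF07B, 0xF07C,
                                         0xF0C7, 0xF0C7, 0 };
```

Passing the curated array to `ImFontAtlas::AddFontFromFileTTF` as its glyph-ranges argument bakes a handful of glyphs instead of the full block, which is where the memory savings come from.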
This optimization delayed the addition of support for more languages, since font handling needs to be integrated carefully to avoid similar memory issues.
For now, however, ImMobile provides the ability to add custom fonts with advanced and dynamic configuration options, allowing users to customize this aspect when needed.
Some issues can completely derail your workflow when youβre building something complex, forcing you to spend time studying problems you didnβt expect to deal with.
I added wallpaper image support and suddenly ran into an unexpected issue: camera photos were appearing flipped.
After some investigation, I discovered that EXIF orientation data exists and is not automatically handled by the OS or DirectX. Because of that, images captured by cameras or phones may require manual orientation correction.
To address this, I implemented logic to detect EXIF orientation tags, apply the required rotation, and cache the corrected result.
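The detection step boils down to mapping the eight values of the EXIF Orientation tag (0x0112) to a rotation plus an optional mirror. The value-to-correction table below follows the EXIF standard (with the mirror applied after the rotation); the struct and function names are mine:

```cpp
#include <cassert>

struct OrientationFix {
    int rotateDegreesCW;     // rotation to apply first
    bool mirrorHorizontal;   // then mirror, if set
};

// Correction needed to display an image upright, per EXIF orientation value.
static OrientationFix FixForExifOrientation(int orientation) {
    switch (orientation) {
        case 2: return {0,   true};   // mirrored horizontally
        case 3: return {180, false};  // upside down
        case 4: return {180, true};   // mirrored vertically
        case 5: return {90,  true};   // transposed
        case 6: return {90,  false};  // common for portrait phone photos
        case 7: return {270, true};   // transversed
        case 8: return {270, false};  // rotated 90 CCW
        default: return {0, false};   // 1 or missing: already upright
    }
}
```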
Later on, I found that WinRT already provides APIs that handle this scenario. At that point, it felt like the original solution was a waste of time. This really highlights the need for better WinRT documentation, especially now, in the AI era, when development tools and editors should be able to inform developers that WinRT can handle pixel transformations and image orientation internally. Instead, this kind of knowledge often remains buried or discovered only through trial and error.
Regardless, the feature works correctly now. Even very large images are properly processed, compressed, and cached.
TBC..