# Why I Abandoned My 3D Game Engine
## The Goal In Mind
Throughout 2025, I have been developing a 3D game engine that I called Nikola. And, as of April 2026 (the time of this writing), I’m not anymore. So, what happened?
My main goal with Nikola was to develop a tool filled with just enough pieces so that I could make my games. I was not planning to compete with Unreal or Unity. I was not even planning to compete with the Fyrox Engine. I just wanted to make an engine that suited my needs. And my games are simple. They’re not huge RPGs. They are not complicated, big worlds featuring the most bleeding-edge graphics. I wanted to only support Windows and Linux. Perhaps the Web, if I could get around to it, but it wasn’t a priority. My knowledge of graphics in general was small and limited to OpenGL, so I decided to support it as the sole graphics API. Essentially, my goals were simple, and my ambitions were kept small.
Despite that, though, I lost focus and ballooned the engine, adding features that I really did not need. Additionally, even though my goals were simple, my knowledge was lacking. Sure, I made two 3D games before this, so I “knew” what I needed. But, in reality, I really didn’t. Like, not even close.
My past experiences with 3D games were fairly simple. The “renderer” that I made for both games didn’t have lighting or shadows, and used a very simple mesh system. I didn’t support skeletal animation (or animation in general), audio was simple, physics was done by hand, and so on. And while that was fine for the most part, I wanted to improve my 3D abilities to make slightly less shittier games. I went in a specific direction, and I kept going. And kept going. And kept going. And I didn’t really stop until I had a bloated engine with a bunch of incomplete features, each spawning a plethora of problems that I would either ignore or apply a very quick and dirty fix to.
From the very beginning of development, I wanted to reduce the number of dependencies that I was going to use. I didn’t want the project to have 30 dependencies that have to be fetched, compiled, and tested. I wanted a “minimal” game engine, with just enough features. Well, if you go have a look at the readme of the engine, you’ll see that the list of dependencies is far from “minimal”. And while it is true that I still hate projects with a bunch of dependencies that have no reason to exist (I have horror stories, dude), every dependency that I included was meticulously considered. I would spend days trying to decide whether or not to add a library. And while I did lose some tufts of hair from excessively worrying about which physics engine to use, those “self-meetings” were very useful. I never regretted any of the libraries that I used. I enjoyed using each one. Now, there is a caveat to this. Since I’m a narcissist who cannot stand looking at other people’s petty code (sorry), I wrapped every library I used in my own API. That was for two main reasons. The first was as a safeguard: if and when I wanted to swap out a library, I could easily do it without changing anything integral to the codebase. The second was ease of use. Many libraries came with very OOP-style APIs, which my engine did not adhere to. Wrapping the libraries also let me use my own types, deferring the conversions to the function body itself. This was especially useful with the physics engine, Jolt, which has its own vector, quaternion, matrix, and transform types. For many libraries, a feature that required 3 or 4 lines of code was wrapped in a single function call instead.
I’m saying all of this not to stroke my ever-growing ego, but because it matters for this conversation. The systems that I really liked, even after abandoning the engine, ended up being those for which I used third-party libraries. Physics with Jolt, audio with OpenAL, UI with RmlUi, skeletal animations with OZZ animations. All of them saved my ass from diving deep into rabbit holes that I was fairly interested in, but that would have extended the development period even longer.
> [!NOTE]
> I know this seems very out of place, but oh my god, I can’t tell you how much I fucking love OZZ animations. If you ever want to implement a 3D skeletal animation system that is fast, reliable, and packed with useful features, please, please, please use OZZ. It’s fucking amazing! I’ll get on my knees, dude, I don’t give a shit. Use it!
And so, because of that, I won’t spend much time talking about those systems I liked. Perhaps I can come back to them at some point, or you can look at the code if you’re interested. However, what I will talk about are the failures and regrets I had along the way, what I would have done differently, and what I’m doing right now (I’m drinking coffee, thanks for asking, dude).
## All About Resources
First of all, before going any further into the details of the systems, I need to mention my main goal with each system. Since I was planning to make a game engine that was specifically suited for my games, I wanted to have every system be oriented around that thought. And the games that I like to make (and play) are very “level-based”. Meaning, the player starts a level, traverses it, beats it, and then moves to the next level. So when I was thinking about the resource manager, I wanted to think about loading resources on a per-level basis. And the same goes for the renderer and the rest of the systems. Got it? Okay, cool. Let’s continue.
The resource manager was probably one of the very first systems I worked on after setting up a basic window, input polling, and an event system. I knew that I needed it very early on, so I started working on it immediately. However, like with every system that I made for this engine, the initial look of the resource manager changed a lot. Despite that, out of every system in this engine, the resource manager is probably the one I loathe the most.
Essentially, the resource manager had two parts: the resource group, and the NBR tool, which stands for Nikola Binary Resource. I have so much creativity that I don’t really know where to store it anymore. Nonetheless, let’s talk about the NBR tool first, because it’s the source of my main issues with this system.
Now, let’s say that you have a very nice 3D model of a carrot that you want to import into the engine for whatever reason. Maybe you’re making a vegetable rogue-like. Who knows what you’re up to, you weirdo. Anyway, let’s say the carrot 3D model is in the GLTF format. But the engine itself doesn’t actually understand what a “GLTF” is. It only understands resources of the proprietary .nbr format. That format is designed specifically for the engine. No other engine knows it even exists. So you’ll have to convert the GLTF format into this NBR format, after which you can freely use it in the engine. And that’s where the NBR tool comes in. That’s right, the NBR tool is a command-line tool that you’ll have to use in order to convert any intermediary format into the NBR format. Why wasn’t the conversion built into the engine itself, and why was it deferred to an external tool? Well, I’m sure I had reasons for it.
As far as I can remember, I wanted to divorce the engine from the external tools to help reduce the number of dependencies. Unless I wanted to write converters for audio formats, model formats, image formats, and so on (and I was not keen to do so), I had to use third-party libraries to help with that conversion. So, if you were making a game with Nikola, you just needed to install the CLI tool once on your system, and you never had to worry about fetching and building any third-party libraries concerned with the conversion. And while I do admit that it’s a valiant reason, it ended up being a detriment rather than a help.
Anytime I wanted to make a game, the part I dreaded the most was resource management. It was a huge burden on the development of any game with this engine. Not to mention how unfriendly it was for non-programmers. I don’t know how I really expected an artist or a musician to use a CLI tool every time they wanted to import an asset.
That decision came partly from a philosophy that I had about game engines, which states that the game and the engine should be one and the same. If, say, I was working with Unity, I would make the game inside the engine, and then I’d export the game as its own executable. With my engines, however, I do it more old-school. The engine is just a library that the game uses. You’re in the game’s executable the whole time when you’re developing the game. I have an article about it somewhere. You can go read it if you care.
Either way, the tool was actually pretty cool, if I do say so myself. You could convert one resource at a time if you wanted. But that’s a hassle if you want to convert a bunch of resources at once. And so, instead of listing each resource as a flag, you could write them all in a file instead. That’s right, I made a TOML-like programming language specifically for the NBR tool, which I called NBRList. Seriously, dude, the creativity is, like, choking me.
The programming language had resource sections, which specify the type of resource like TEXTURE, FONT, or ANIMATION. Then, you can list each resource under that section declaration, or you can just refer to a specific directory where all your important resources live.
```
::TEXTURE                        # All textures will live here
textures/player/cool_shades.png  # Direct path
$textures/enemy                  # Create a local variable for any subsequent paths to prepend to
uncool_enemy.png                 # Full path = textures/enemy/uncool_enemy.png
very_cool_boss.png
textures/                        # Or the tool can just iterate through the whole directory instead
```
Either way, the tool used a multi-threaded job system: each resource was submitted as a “conversion” job to a worker thread, which converted it into the NBR format, spitting out a .nbr file. The engineering is impressive, but it could have been way simpler. My god, way simpler.
Now that you’re done with converting all the required resources, you can head to the engine to use it. And, since I was focused on level-based games, I wanted a resource manager that reflected that. So, with the help of my trusty friend, the Game Engine Architecture book, I cooked up a cool system for such a feature.
In order to load a resource, you first had to create a “resource group”. When you created a resource group, you’d be given a unique ID that you could use to refer to that group later. A resource group held every resource that you loaded into it. When you wanted to refer to a resource, the engine would retrieve the group ID associated with that resource and pull the resource from there. The group ID was just a simple unsigned short. Yeah, not much for safety. You could push a bunch of resources into a group, and then unload the group later once you were done with it. Again, the basic idea here is that a “group” basically represented a “level”. Or, at least, it contained all the resources that the level needed.
Now, as for the resource itself, it was a bit different.
When you pushed a resource into a group, you’d get back an ID, very similar to the resource group creation. However, the resource’s ID was slightly different. It had a unique ID to identify the resource, of course, but it also saved the group ID that was used to load the resource. It also contained a simple ResourceType enumeration that could be used to identify the type of that resource for retrieval later. The resource ID was wrapped in a struct to conveniently hide both its own ID and the group’s ID. And so, if you wanted to retrieve a texture, for example, you’d save its ID somewhere (in a Level struct, maybe), call one of the many resources_get_* functions for that specific resource type (in this case resources_get_texture), and the function would use the resource’s internal group ID to retrieve the resource itself. Confused yet? Yeah, me too. Oh, also, did I mention that every resource was saved to its own separate file? Yes, that’s right. Instead of making a resource blob specific to a level, each resource had to be referred to separately, and it sucked if you had a bunch of resources, because now you’d have a directory with a bunch of .nbr files just hanging around.
Safe to say that the resource system sucked. My main problem with it, though, was the very hard-to-use API. Whether that was the NBR tool, the need to push each resource manually (or you could push a whole directory, but you had to do it explicitly in code), or the lack of asynchronous resource loading, which would have really improved loading times, the resource manager was just a mess. And to be honest, those failures came from a lack of knowledge on my part. Had I researched the topic more and seen how other engines did it, I would perhaps have made a better resource manager. Alas, though, that is not the case.
I’m still very glad that I attempted it, at the very least. I surely learned a lot from the mistakes I made here. Not to mention the newfound hatred for many intermediary formats that I got to discover along the way. Yes, FBX, I’m looking at you…
Although, if I were to do this again, I would probably make the conversion an integral part of the engine without relying on an external CLI tool. That way, any non-programmers working on a game made with this engine wouldn’t have to worry about using a damn terminal. And, for a distribution build of the game, I would probably just disable any dependencies that did the resource conversions. The build time would be faster, and the executable wouldn’t be so bloated.
I would have also stuck all the resources associated with a single group into a binary blob, like WAD for example, referring to each resource with an index, or just loading all of the blob’s resources into memory at once. I would have also tried to use some kind of ZIP library to compress those blobs, because they tend to get very huge. Not only would you avoid multiple files infesting your game directory, but it would also eliminate the need to push each resource explicitly in code.
I would also not use NBR as a name. That was stupid.
By the way, I have an article talking about the resource manager here. If you’re interested (why?), you can dive deeper.
## The Belle Of The Ball
If there was one system that I was severely unequipped to make, it was probably the renderer. By a long shot.
For me, the renderer is probably the most interesting, most fun, and most important system of any engine. It defines the way your engine looks. And for a medium that is very visual, that really matters. And, because of the complexity associated with renderers and the importance I personally place on them, I was very anxious to tackle it. But I did. By god, I did.
Firstly, a word from our sponsor. No, I’m kidding lol.
My experience with making 3D renderers at that point in my journey was, well, lacking, to say the least. I went through the famous LearnOpenGL, and got my OpenGL fill. I had made very minimal and shit 3D renderers before. But they were just that: shit and minimal. Mainly, shit, though. And so, I wanted to step up my game (no pun intended) with this renderer. I wanted to tackle all the different graphics techniques. Shadows, lighting, particles, decals, render passes, global illumination, and… wait a minute. I’m making a minimal 3D engine, right? Well, yeah. I kind of slipped.
By the end of my journey with this engine, the renderer used a very shitty PBR shading model, basic shadow maps for directional and spot lights only, 3D particles, HDR, a not-so-bad render pass system, and bindless textures, and it heavily abused the newer multi-draw indirect OpenGL features. Oh, and it also supported 3D skeletal animations, but that’s another system entirely that just used the renderer. So, what the fuck happened? I thought you wanted to make a simpler engine. Well, excitement, my friend. That’s what happened.
Initially, the engine had a Blinn-Phong shading model, with no shadows whatsoever. But, since I’m a sadist and I like pretty things, I decided that going the PBR route wasn’t that hard. Right? Well, actually, PBR wasn’t that hard to implement. It was just hard to get right. Not to mention that it was extremely difficult to manage the performance issues that popped up. I noticed a very sharp drop in frame rate once I implemented PBR, likely because my implementation was fairly immature and careless about performance. At the same time, however, PBR is the golden standard nowadays, and every 3D model or animation I found online used PBR materials. However, PBR and Blinn-Phong materials are similar enough that converting between them would not have been a huge issue.
But I was blinded. I loved how it looked, and I was interested in the tech. I had a copy of the Real-Time Rendering book sitting on my shelf, teasing me to read it. And, as the Lord is my witness, I succumbed to my lustful rendering ways.
While I’m not advocating for every renderer to return to the Blinn-Phong shading model, it is still far simpler to implement than PBR, and it would have been more than enough for a while. I should probably have stuck with it while I got other parts of the engine together, and focused my attention on implementing other graphics techniques that are very important, like shadow mapping. That would have also given me enough time to focus on overlooked areas of the renderer, like its API, its integration with the ECS, and its overall performance, or lack thereof.
I actually did not write an article about the renderer, because I was very embarrassed about it (and still am), so I’ll briefly go over how it worked.
Essentially, the renderer was a collection of render passes, which themselves were comprised of a framebuffer, a few render targets, and a render queue. A render queue was just a collection of render commands, plus a few GPU buffers that contained data like the bindless texture IDs, GPU render commands, and the like. If you wanted to render a mesh, for instance, you’d pass your mesh resource into the renderer_queue_mesh function, along with other arguments such as the transform and material. The renderer would then add it to the relevant queue for later processing. This function, and any similar renderer_queue_* functions, had to be called after renderer_begin and before renderer_end.
The renderer_begin function takes in a FrameData struct, which includes all the useful goodies of a frame: a camera, a skybox (because why not?), an ambient color value (I was too scared of global illumination), a directional light, and then two arrays, one for the point lights and the other for the spot lights. I’ll talk in more detail about this mess in the next section. But you had to have a FrameData lying around somewhere, alive throughout the whole level/scene. Either way, this function only prepared the camera’s matrices to be sent to the main uniform buffer that all shaders shared, and then cleared all queues from the previous frame. Then, after calling a bunch of renderer_queue_* functions, the application reaches the renderer_end function, which does several things.
Firstly, since the renderer used multi-draw indirect calls for everything, the function went through all active render queues and updated their internal buffers. All the materials, vertices, indices, animations, and commands were updated within this loop for each queue. Now, you might be asking why there are multiple queues. Well, I separated queues into three types: QUEUE_OPAQUE for opaque, non-transparent objects, QUEUE_PARTICLES for, well, particles, and QUEUE_DEBUG for debug objects, which were usually transparent. This system had a bit of good in it, but it was ultimately slow, inefficient, and a huge mess to deal with.
After that, the function initiates the render pass chain, which was just a doubly-linked list of render passes. As I said, render passes had framebuffers, render targets, and a render queue associated with them. Besides that, passes also had their own special shader that would be used to render each object (so that a shadow pass could render an object differently than a regular light pass, which is probably going to use a BRDF shader or something like that), a frame size, and then four callbacks. For each pass, you’d have to pass in (get it?) a callback to be called on the creation of the pass, a destruction callback to be called upon the pass’s destruction, an optional preparation callback that would run just before rendering, and then a submission callback that would go through the queue and render everything out, using the pass’s shader to do so.
All I can say is that, while the system had some good in it, it lacked any kind of solid design. Again, I made this system when I lacked knowledge of anything 3D rendering. In fact, the renderer initially looked vastly different from how it eventually ended up. I didn’t even know anything about “render passes”. And when I did implement them, I had to break the previous renderer and insert this new thing I had learned. Eventually, the renderer became a mess of old and new assumptions, molded together by tape and glue, barely able to work. It became hard to work with, and even harder to fix. And I haven’t even talked about the performance of the renderer, which was abysmal.
Once again, I do not regret venturing deep into the rendering pipeline (get it? Whatever, it’s a cool joke). I’ve learned a ton from doing PBR and rendering in general, and I plan to carry those lessons into my next 3D projects. It was my very first serious attempt at making a renderer, so, given that, I think I did okay. Not good, and certainly not the best. But just about okay.
## The Entities And The Scenes
The failures of the previous systems came from incorrect and incomplete assumptions made by me, fueled by a lack of knowledge and, perhaps, a bit of overconfidence. And, to some extent, that is a natural occurrence, especially if you’re still new to the matter like me. However, when it comes to the scene system, the problem stemmed from a heavy bout of stubbornness. The problems were also caused by my lack of knowledge, yes, but I also vehemently rejected the idea of a scene system within the engine for a long time. In fact, I didn’t include a scene system until the last 3 or 4 months of developing this engine. Not because it took me a long time to implement, no. But because I didn’t want to. But why?
Before starting on this engine, I outlined a few “philosophical” tenets, if you will. I thought that these tenets were the hills I was ready to die on. I shall never disavow any of them! But unless you’ve done this sort of thing a thousand times before, and have plenty of years of experience in the field, you cannot be so close-minded, since you really don’t know anything. Actually, I would even go further and say that, early on, you don’t know what you don’t know. Meaning, there are things you are aware you don’t know how to do, and then there are things that are completely unknown to you, and you won’t know about them until you gain experience. And rather than exploring all the possibilities, I decided that my way of thinking was superior, and that, therefore, I must do it this way and no other way. And one of those stupid tenets was that I should never make any kind of scene system. Why? Well, I believed that scenes were special on a game-by-game basis. I thought that, while there might be some similarities, the engine should not enforce any scene system upon the user.
And, to some extent, I still believe that. But I also believe that every engine differs. It largely depends on what you are trying to achieve with your engine. There isn’t really a right way. There’s just the best way that suits you. For me, however, there is a clear distinction between the entity system and the scene system.
Entities largely depend on your preference. Maybe you like game objects that inherit from one another. Maybe you prefer game objects with components instead. Perhaps you feel more comfortable with an entity component system. Maybe you hate everything and just want to go completely procedural.
Scenes, on the other hand, are a collection of entities, scripts, resources, and everything that represents a level, essentially. You can serialize and deserialize scenes either into text or binary formats. You can build a scene hierarchy, transitioning from one scene to another with ease. But, when it’s all said and done, scenes are not entities.
Now that I ranted about some minor semantics that nobody really gives a shit about, let’s talk a little more about me.
With every game I make, the scenes always change. There are some similarities, of course, but I’m still not sure where the line is drawn. And that’s why I never implemented a scene system in any engine I made. However, my entities, no matter the game or the programming language, always had big similarities.
As I was developing the engine, I wanted to make a few showcases to enthrall the public with the amazing and beautiful features of my engine that nobody had ever seen before. And so, I set up a few simple testbeds specifically for that purpose. I wanted to set up a few scenes to show off the particles, lighting, audio, or what have you. And, even though the features themselves worked fine, I fucking hated setting anything up. I had to keep track of transforms, resources, and other things. I had to remember to update and render. I had to write everything once, twice, and three times. It became exhausting. So much so that I just stopped showcasing anything, which was probably for the better.
Usually, if I wanted to implement any system in the engine that I was unsure of, I would make a simple game, implement it there, and then decide whether to add it to the engine proper or to just leave it to be a per-game thing. And so, as I was super high on my horse, deciding to tackle an adventurous twin-stick ARPG shooter inspired by Diablo, I thought it would be a good idea to use the EnTT library, which is an implementation of an entity component system in C++. And, as soon as I set up everything, I knew for a fact that I needed EnTT in the engine proper.
While an ECS is not the end-all, be-all of entity systems, it is the one that suited my needs the most. My code is written in a very procedural style, so I did not like any library that had classes that must be inherited from. An ECS is nothing more than just a huge struct filled with arrays that you can access with a single unique ID. It suited me very well. And so, before I could even write an extra line of code for the game, I immediately went back and added EnTT to the engine. And, man, I wish I had done that from the beginning. But instead, I had abided by those stupid philosophies I’d made up, treating them like the Bible.
Not to mention that the renderer would have benefited a lot from an early addition of the ECS, since I could have avoided the whole FrameData mess by simply passing in an instance of entt::registry, iterating over the light components, mesh components, and animation components, and adding them to the render queue. I could have made lights into simple components instead of whatever they ended up being. Of course, there’s probably a better way to do that. I’m not saying this is a good way, but it’s certainly better. Much better.
## From The Ashes
Overall, I do not regret working on this engine. Not even a little bit. I spent about a year and two months working on it, and, if I could rewind time, I would do it again.
This engine taught me a lot of lessons. Lessons I plan to carry with me in the future. I learned that there isn’t really a single “best” solution to any problem. I learned that architecture is probably the hardest problem facing an engine programmer, and not the plethora of maths. I also learned not to be so stubborn and close-minded on any topic. If I could speak to my past self, I would probably tell him to just become an accountant. It’s, like, way easier, my guy. But then I would tell him to stay open-minded and not be presumptuous about any engine topics. When you’re just starting out, learning everything and anything is the most important thing you can do. You heard that deferred rendering is terrible, but is forward any better? How would you know? Some are telling you that it’s good, and others are telling you it’s the devil. I even saw one dude on Reddit who said that he (or she, I’m not sure) lost some friends in the industry because of their disagreements about the faults of deferred rendering. Dude, it’s really not that serious. Compared to other things you can lose friends over, their opinion about deferred rendering is pretty low on the list. I don’t think it’s even on the list. Just try both. Who cares? Maybe you’d like one over the other. Maybe you’ll hate both. Just try it out, man.
However, if I had to name specific regrets, I would definitely try to keep my scope small and remember why I was making an engine in the first place. Making a PBR renderer is cool, yes, but do I really need it for my games? After all, I just wanted to make games inspired by the likes of Quake and Thief. Do I really need PBR to achieve that? Probably not. If I were to start from scratch today, with the goal of making a 3D engine specifically for my games, I would probably take most of the code I liked from this engine, use a Blinn-Phong shading model for the renderer, and think about the usability of the engine from a user’s point of view rather than getting excited over a new graphics technique. And you know what? A voxel engine sounds pretty cool right about now…
So what am I doing now? Thanks for definitely asking, weird person on the internet who’s probably drinking coffee right now.
Well, I’m still making a game engine! I actually lied to you, dear weird person on the internet. I did not abandon this engine entirely. I took about 60% or 70% of the code and started making a 2D game engine instead. I called it Freya, and it uses plenty of systems I had already established with Nikola. And that’s right, my streak of weird engine names never ended.
I realized that I had a pretty good base for making just about any graphics project. Window, input, events, UI (for both game and debug UI), audio, multi-threading, ECS, logger, math, filesystem, and even an OpenGL wrapper I wrote named gfx. While the 3D engine I was making was way below average, I realized that I could take what I built and make a very impressive 2D engine instead. And that’s exactly what I did. So far, besides the features from the 3D engine I talked about, the engine also has a post-processing system, 2D animations, a flexible 2D renderer, 2D particles, simple tilemaps, and even a noise-based procedural generation system. It’s all still a work in progress, but it’s going pretty well. I’ve been at it for about 3 months, so it has a long way to go. If you’re interested, you can check it out, but there’s no pressure. You can finish your coffee first… I’ll wait…
Thanks for reading, and have a good day/night.