VR or aren’t we?
The history of Virtual Reality (VR) goes back quite a long way, and it has been a rollercoaster ride so far. Its development dates to the 1960s, when it was heavily funded by the U.S. military and space program. The original concept for a stereoscopic device, which is essentially what VR goggles are, goes back even further, to the 19th century. So it's safe to say that the idea of an apparatus serving as a portal to another world, even an artificial one, has captivated people's imaginations for over a century. Since the success of the Oculus Rift in 2012, there has been a surge of excitement, with big firms expecting to make VR gaming a mainstream phenomenon; yet, for some reason, it has eluded its seemingly predestined fate. Contrary to popular opinion, I believe it will never be a long-lasting commercial success, and let me explain why.


Early video games had a distinct barrier separating the virtual and real worlds. From graphical fidelity to the interfaces controlling games, there was a clear gap between the two. Games were abstract representations of their real-life counterparts. As technology evolved, this gap narrowed. The primary contact point for gaming has always been the interface. Traditional gaming relies on analog and digital inputs housed in a handheld controller. It's passive gaming: you can play sitting, lying, or standing, requiring only finger movements to manipulate the game. There were attempts to expand this, notably from Nintendo, with devices like the Power Glove for the Nintendo Entertainment System and the Wii's Nunchuk, to utilize a fuller range of arm and hand motion. Think about holding a stick in your hand: with multiple joints (shoulder, elbow, wrist), you can translate it along three axes (up/down, left/right, forward/backward) and rotate it around three more (pitch, roll, and yaw), for six degrees of freedom (6DOF) in total. The former was a failure due to its very limited capabilities, while the latter was a commercial success, but still couldn't be considered true 6DOF hardware. The first PC interface with 6DOF was the Razer Hydra in 2011.
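To make the 6DOF idea concrete, here's a minimal sketch of a tracked pose as a data structure. The type and field names are my own illustration, not taken from any real tracking SDK:

```python
from dataclasses import dataclass, fields

@dataclass
class Pose:
    """A rigid body's pose: three translations plus three rotations."""
    x: float = 0.0      # left/right
    y: float = 0.0      # up/down
    z: float = 0.0      # forward/backward
    pitch: float = 0.0  # tilt around the left/right axis
    yaw: float = 0.0    # turn around the up/down axis
    roll: float = 0.0   # rotation around the forward/backward axis

# A gamepad thumbstick drives only two of these values at a time;
# a fully tracked VR controller can change all six at once.
dof = len(fields(Pose))  # 6
```

A rotation-only (3DOF) device would report just pitch, yaw, and roll; it's positional tracking that makes all six values available.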



With the arrival of the Oculus Rift, the first successful VR hardware with two separate full-range-of-motion controllers, we entered the realm of active gameplay. It became possible to perfectly replicate every movement a player made in real time. Now, VR headsets with motion tracking and full-range controllers offer unprecedented input possibilities for gamers. As video game graphics have evolved into near-lifelike representations, the barrier separating artificial and real worlds has begun to fade away. As more VR headsets have been released, they've become more affordable, and tons of games have been developed for these systems. But somehow, VR still can't break through into the mainstream, despite companies like Meta spending astronomical sums—$100 billion and counting—to push it further. So what stops it from overtaking the gaming market?

Besides the obvious reasons (the still-high initial cost, the lack of a large library of original, high-quality games, and the bulky, uncomfortable hardware), there's one aspect no one talks about. I believe the answer lies in VR's nature. VR is active gameplay, where a player's movements are fully translated into in-game motion. While the immersion is close to a perfect simulation of life, key elements are missing. There's no tactile feedback. When you hit a ball with a virtual tennis racket, there's no sensation of contact or rebound, only visual cues and weak haptic buzzing to confirm the action. The player doesn't feel the wind affecting the ball's trajectory or the racket's weight, even though the motions are the same as in a real-life tennis game. The virtual experience is inferior to the real activity, yet both require the same physical effort. It doesn't matter if visuals reach indistinguishable-from-reality levels; VR still won't be on par with reality.
I once read a 2007 interview with Japanese video game designer Tomonobu Itagaki, where he pointed out a crucial aspect of traditional gameplay:
“On a game console, what you see is an exponentially larger output from the game, as far as visuals, sounds, and everything. You might push a single button, and an incredible amount of activity will happen on the screen. Just from a purely objective viewpoint, having the rate of increase of output versus input—well, the higher it is, the better. The more you get out of what you do, the more human beings will be happy, and find that experience fun. […] you have a controller, which gives you very small inputs right on a physical level, but it’s creating a huge reaction on-screen, for what you’re seeing. This is very appealing for people to play.”
It perfectly summarizes what's "wrong" with virtual gaming. Players must perform the same movements as in real-life actions. The input-output ratio is 1:1, but without all the sensory depth of the actual world. And I haven't mentioned another VR pain point: the locomotion hurdle. Because players are confined to a restricted physical area, covering large in-game distances isn't possible except through flawed virtual workarounds. The resulting mismatch between what the eyes see and what the body feels often disrupts players' inner equilibrium, causing discomfort or disorientation.
It's important to note that this opinion reflects the current state of VR. There are many use cases where these "flaws" don't matter. VR enables experiences that would otherwise be unavailable to people due to disability, location, or resources. In the near future, it may overcome its limitations through technological advancements, and until then, VR remains very relevant in education and professional applications. VR indeed allows players to enter worlds beyond reality and fulfill fantasies, and many would argue that the active-passive gameplay tradeoff is worth it for the immersion.
If I had to bet on a winner, I'd say that augmented reality (AR) or mixed reality (MR) will be more popular among users, because they combine real-world sensations with simulated elements—the best of both worlds. Imagine playing basketball with friends at the local park, with visual effects enhancing the game: flaming balls for a 3-pointer or a huge crowd of spectators around the court. With smart sunglasses, you could transform any mundane weekend session into an NBA Finals experience. Guess we'll have to wait and see how the future unfolds for virtual reality applications.
Every day I’m filletin’…
Now that I had the basic shape, it was time to put it in 3D. This was usually Gyula's task, but since he could no longer participate in the project, there was no one to create the CAD drawings. That could have been the end of it, but I thought, if no one else would do it, I would do it myself. I've been working in 3D for years, mostly with polygon-based design and some NURBS in Rhino, but parametric design was completely new territory. Watching Gyula shape the earlier prototype, I had found it hopelessly complicated. However, with no other option, I jumped right in. As it turned out, once I grasped the basics, it was much easier than I expected. Having learned many programs over the years as a graphic designer likely helped me navigate the initial learning curve.
There are two main approaches to designing almost anything: top-down and bottom-up methodologies. They're fairly self-explanatory, but you can look up their definitions for more detail. As a beginner, I believe most people with a rational approach would choose the bottom-up method, as it allows more freedom for iteration and error correction. I chose the former: top-down. The bottom-up method would have been helpful when designing the parts inside the housing, but going top-down meant that if the parts didn't fit, I'd have to go back to the start and alter the controller's body. Once I had the rough shape, I performed a basic check to ensure the six DC motors would fit, but that was my only precaution before diving into the design. Having the previous controller version to measure was incredibly helpful, as I knew what worked size-wise.


I took photos of the plasticine model from all sides: front, top, and side views. I formed the outlines with curves and extruded them along their respective axes. By intersecting these shapes, I removed anything out of bounds, resulting in a CNC-machined-looking block. It took longer than it should have, as I was learning the software as I went. I’m grateful for the variable fillet width feature, which greatly helped me create an organic, ergonomic, and aesthetically pleasing shape.
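The extrude-and-intersect trick above can be sketched on a voxel grid: a voxel belongs to the block only if it falls inside the front, top, and side silhouettes at once. Here's a toy NumPy version; the silhouettes are made up for illustration, where in reality they'd come from the traced photo outlines:

```python
import numpy as np

N = 8  # toy grid resolution

# Binary silhouettes, one per photographed view (toy shapes: a disc
# for the front view, full rectangles for the other two).
a, b = np.mgrid[0:N, 0:N]
front = (a - N / 2) ** 2 + (b - N / 2) ** 2 <= (N / 2 - 1) ** 2  # seen along z
top = np.ones((N, N), dtype=bool)                                # seen along y
side = np.ones((N, N), dtype=bool)                               # seen along x

# "Extrude" each silhouette along its viewing axis via broadcasting,
# then intersect: solid[x, y, z] is True only where all three agree.
solid = front[:, :, None] & top[:, None, :] & side[None, :, :]
```

In a real CAD package this is three sketch extrusions and a Boolean intersect; the voxel version just makes the geometry of the trick visible.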
Once I achieved the desired 3D housing, the real engineering began. First, I converted it into a hollow body for assembly, ensuring a wall thickness suitable for 3D printing—neither too thick nor too fragile. Then, I split it into two parts. The simplest approach was to define a parting line along the horizontal midplane, ensuring it didn't interfere with the overall shape. This was easier said than done. If you've ever disassembled an electronic device, you've likely noticed a lip and groove along the edges. These features help the parts fit snugly despite any imperfections. Since I'm designing for 3D printing, I don't need to account for all the critical engineering factors, such as those required for injection molding. Finally, I had the complete housing: two parts that could be fitted together with ease.
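The hollowing step can be sketched on the same kind of voxel model: a voxel is "interior" if all six of its face-neighbors are solid, and removing the interior leaves a one-voxel-thick wall. This is only a toy stand-in for a CAD shell operation, which lets you dial in the wall thickness directly:

```python
import numpy as np

def shell(solid: np.ndarray) -> np.ndarray:
    """Keep only surface voxels: remove every voxel whose six
    face-neighbors are all solid."""
    # Pad with empty space so boundary voxels count as surface.
    padded = np.pad(solid, 1, constant_values=False)
    interior = solid.copy()
    for axis in range(3):
        for shift in (1, -1):
            # Shifted copy = the neighbor one step along this axis.
            neighbor = np.roll(padded, shift, axis)[1:-1, 1:-1, 1:-1]
            interior &= neighbor
    return solid & ~interior

cube = np.ones((4, 4, 4), dtype=bool)
hollow = shell(cube)  # outer wall only: a 2x2x2 cavity is carved out
```

Running the same idea at a higher resolution, with more erosion passes, is essentially how a uniform wall thickness falls out of a solid body.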



Stay tuned…