For video game developers, a common decision during development is which platforms the game will support.1 This decision matters because each additional platform adds work to accommodate that platform's input devices. For instance, if you plan to sell your game on both PlayStation 4 and Xbox One, you need to tell the game's engine to recognize input from both a PlayStation 4 controller and an Xbox One controller. Each additional platform adds complexity to the game's input handling, which adds both time and money to the process.
Adding one or two extra platforms isn't really the problem; it's the sheer number of platforms now available that has made this phase of development burdensome. When Unity first launched back in 2005, little emphasis was placed on streamlining the job of accommodating multiple input devices. In the years since, the roster of platforms has expanded to include phones, tablets, and various virtual reality systems. Add the rise of cross-platform play, which further incentivizes developers to offer wide platform support, and this phase of development was overdue for an overhaul.
Although Unity's original input system offered considerable granularity, it was not designed with simplicity in mind. Unity has now built a new input system from scratch to lessen the burden of supporting multiple platforms. The new system is currently in beta, supports an unlimited number of devices, and promises to be more extensible and customizable than the traditional input system.
For instance, Unity has changed the way player input is interpreted. Rather than working at a low level (the space bar or the A button was pressed), the new system works at a higher level of abstraction: named actions like "jump" or "interact." This decouples gameplay logic from any particular device and makes for a more flexible and intuitive binding system. For example, say you're setting up the "move" action in a 2D space. After choosing a control type (in this case, Vector2) and picking a device (say, an Xbox One controller), the editor automatically presents you with only the relevant input options, such as the D-pad and the left and right sticks, since these are the only controls on that device that can produce a Vector2 value.
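In code, the same idea looks roughly like the following sketch, which uses the new Input System's `InputAction` API. The class and variable names are illustrative, but `InputAction`, the `<Gamepad>/...` binding paths, and `ReadValue<Vector2>()` are part of the new system; the snippet assumes the Input System package is installed.

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class PlayerInputExample : MonoBehaviour
{
    private InputAction move;
    private InputAction jump;

    void Awake()
    {
        // "Move" is a high-level, Vector2-valued action. The binding path
        // "<Gamepad>/leftStick" matches the left stick on any gamepad,
        // PlayStation or Xbox, so no per-device code is needed.
        move = new InputAction("Move", InputActionType.Value,
                               binding: "<Gamepad>/leftStick");
        move.AddBinding("<Gamepad>/dpad");

        // "Jump" is a button-type action; gameplay code reacts to the
        // action firing, not to any particular physical button.
        jump = new InputAction("Jump", InputActionType.Button,
                               binding: "<Gamepad>/buttonSouth");
        jump.performed += ctx => Debug.Log("Jump!");
    }

    void OnEnable()  { move.Enable(); jump.Enable(); }
    void OnDisable() { move.Disable(); jump.Disable(); }

    void Update()
    {
        // Read the current movement direction, whatever device produced it.
        Vector2 direction = move.ReadValue<Vector2>();
        // ... use direction to drive the character ...
    }
}
```

Because the actions are defined once and bound per device, adding support for a new controller means adding bindings, not rewriting gameplay code.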
The new system also supports what are known as vector composites, which let one binding source its value from several part bindings. For example, if you wanted to use a keyboard for 2D movement, you could bind the W, A, S, and D keys as the parts of a single composite. Additionally, Unity has changed the way it identifies buttons across similar devices in order to universalize control mapping. For instance, in the images below you'll notice that both controllers have an X button, but in different locations. The new input system ignores a button's printed label entirely and instead refers to the face buttons by their positions: north, south, east, and west.
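Both ideas can be sketched in a few lines of C#. The action names are illustrative, but `AddCompositeBinding("2DVector")` with `With(...)` parts and the position-based `buttonSouth` path are the new Input System's actual API; the fragment would live inside a MonoBehaviour and assumes the Input System package is installed.

```csharp
using UnityEngine.InputSystem;

// A 2D vector composite: four individual key bindings act as the
// Up/Down/Left/Right parts of one Vector2-valued "Move" binding.
var move = new InputAction("Move", InputActionType.Value);
move.AddCompositeBinding("2DVector")
    .With("Up", "<Keyboard>/w")
    .With("Down", "<Keyboard>/s")
    .With("Left", "<Keyboard>/a")
    .With("Right", "<Keyboard>/d");

// Face buttons are addressed by position, not by label: buttonSouth
// is Cross on a DualShock 4 and A on an Xbox One controller, so one
// binding covers both.
var confirm = new InputAction("Confirm", InputActionType.Button,
                              binding: "<Gamepad>/buttonSouth");
```

Referring to buttons by position means a control scheme built against one gamepad works unchanged on another, even when the labels differ.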
These are small examples, but they're indicative of the overall strategy: streamline the inherent complexity of supporting a multitude of devices and improve the development process as a whole.
It's nice when the software side steps up to improve your workflow, but to maximize productivity, you can't rely on software alone. After all, apps can only run as well as the hardware allows. BOXX's mission is to maximize workflows for creative professionals, and we do that by offering the best workstations purpose-built for software like Unity. Because Unity runs both single-threaded and multi-threaded tasks, you need a computer like the APEXX X3, which delivers high core counts on top of high frequencies. Check out our website or consult with a BOXX performance specialist to find out more.
1 PC, PlayStation 4, Xbox One, Nintendo Switch, etc.