Despite several attempts at standardization over the years, there are now almost as many real-time 3D rendering APIs as there are mainstream operating systems; on mobile platforms, nearly twice as many.
For our use cases, mobile apps that could benefit from a few 3D elements and animations intermingled with the regular 2D UI, we need a renderer that is lightweight, yet similar enough across platforms that we could deploy the same type of 3D elements in Android, iOS, and web apps alike, and perhaps even in native desktop apps.
Integrating a full-fledged 3D game engine, such as Unity, into these apps would likely be overkill. Using low-level APIs, such as Vulkan and Metal, would require significant effort, while high-level APIs, such as SceneKit and Filament, vary in their availability and feature parity across platforms.
The question is: where is the sweet spot? Given our use cases, what APIs should we use? Maybe it is more important that the same 3D content production pipeline can be used, even if the rendering APIs differ per platform? What rendering techniques would get us the type of results we are looking for?
For this thesis project, you will, given a set of use cases, evaluate the suitability of various 3D rendering APIs, algorithms, data structures, animation techniques, GPU feature sets, and 3D content pipelines for use in regular iOS, Android, and web apps, and to some extent in native desktop apps.
Ideally, the evaluation will lead to a proposal for how to approach 3D rendering in our apps, accompanied by prototype multi-platform code demonstrating the feasibility of the approach.