[D] Use AI to turn low poly world into photorealistic scenarios
I wonder why we are still trying to mimic the photorealistic world by simulating it explicitly — counting every reflection and polygon, tracing every ray, and so on. Shouldn't this be done by an AI that does the job based on photos plus low-polygon input, like here: https://assetstore.unity.com/packages/3d/characters/animals/poly-art-forest-set-128568 Other stylized games like Zelda BOTW, Team Fortress 2, or even Fortnite could likewise be turned by AI into photorealistic environments.

Shouldn't we start thinking about building AI accelerators (like the first 3dfx cards) for enriching low-polygon worlds that can be generated easily on commodity hardware? I suspect even ray tracing could be approximated by ML. I believe the future belongs to generating worlds with AI rather than with intricate mathematical graphics algorithms — especially since, further down the road, it would be easier to go from such trained networks to a setup where, instead of having the output on a display, it is "drawn" directly into the human brain through a neural interface. AI should also be able to handle cases where an object is moving fast or turning around.
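The core idea here is a per-frame image-to-image pass: the engine renders a cheap low-poly frame, and a trained network maps it to a photorealistic one (roughly what pix2pix-style translation does). Below is a hypothetical NumPy sketch of that pipeline shape — `enhance_frame` is a stand-in for a trained generator, reduced to a single 3x3 convolution so the structure is visible:

```python
import numpy as np

def enhance_frame(frame: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Stand-in for a trained image-to-image enhancer network.

    A real enhancer (e.g. a pix2pix-style generator) would map a
    low-poly render to a photorealistic image; here one shared 3x3
    kernel per channel just illustrates the per-frame pass.
    """
    h, w, c = frame.shape
    out = np.zeros_like(frame)
    # edge-pad so the output keeps the input resolution
    pad = np.pad(frame, ((1, 1), (1, 1), (0, 0)), mode="edge")
    for y in range(h):
        for x in range(w):
            for ch in range(c):
                out[y, x, ch] = float(np.sum(pad[y:y + 3, x:x + 3, ch] * weights))
    return out

# "render" a tiny low-poly frame (random stand-in for the rasterizer output)
frame = np.random.default_rng(0).random((4, 4, 3))

# identity kernel: this toy "network" passes the frame through unchanged;
# a trained network's weights would add detail instead
identity = np.zeros((3, 3))
identity[1, 1] = 1.0
enhanced = enhance_frame(frame, identity)
```

The point of the sketch is that the enhancer sits entirely after the rasterizer — which is exactly why it could live on a dedicated accelerator, the way 3dfx cards sat after the CPU.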