I used StyleGAN2-ADA to generate the original portrait, then applied special effects in Unreal.
For my previous AI exercises I used StyleGAN or StyleGAN2, which required large data sets and long training times. This batch switched to StyleGAN2-ADA, which Nvidia released later. It supports small data sets, which is a godsend for amateur players.
In addition, in terms of framework, I had been using Google's TensorFlow, but this time I replaced it with Facebook's PyTorch. There was no special reason: seeing PyTorch used more and more, I just wanted to try it.
Why do we need to compute the scene depth of a photo?
Because a photo is two-dimensional: if you directly use the color or grayscale of the 2D image to simulate a three-dimensional effect, the result often does not match the real-world scene:
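As a toy illustration of why a real depth map matters, here is a minimal sketch (all names are mine, assuming a NumPy depth map normalized so 0 = far and 1 = near) of the basic 2.5D parallax trick: shift each pixel horizontally in proportion to its depth. If you fed grayscale in place of depth, bright objects would shift instead of near ones, which is exactly the mismatch described above.

```python
import numpy as np

def parallax_shift(image, depth, max_shift=8):
    """Shift each pixel horizontally in proportion to its depth (0=far, 1=near)."""
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    shifts = (depth * max_shift).astype(int)  # per-pixel shift, in columns
    for y in range(h):
        for x in range(w):
            nx = min(w - 1, x + shifts[y, x])  # clamp at the right edge
            out[y, nx] = image[y, x]
    return out

# Hypothetical example: a 4x4 gradient image whose right half is "near"
img = np.arange(16).reshape(4, 4)
depth = np.zeros((4, 4))
depth[:, 2:] = 1.0
shifted = parallax_shift(img, depth, max_shift=1)
```

Only the "near" right-half columns move; the untouched background keeps its place, which is the visual cue a flat grayscale fake cannot reproduce.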
On this day last year, I made a Christmas tree using Blender and Python:
This time I tested another one:
Export the mesh generated by the Sverchok plug-in or a Python script from Blender, then import it into a web page for rendering.
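The export step can be sketched without Blender at all: the Wavefront OBJ format that web viewers such as three.js can load is plain text, one `v` line per vertex and one 1-indexed `f` line per face. A minimal, hypothetical writer (names and the triangle mesh are mine, not from the original project):

```python
def write_obj(path, vertices, faces):
    """Write a mesh as Wavefront OBJ: 'v x y z' lines, then 1-indexed 'f' lines."""
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for face in faces:
            # OBJ face indices start at 1, so shift the 0-based indices up
            f.write("f " + " ".join(str(i + 1) for i in face) + "\n")

# A single triangle as a stand-in for the Sverchok-generated mesh
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
tris = [(0, 1, 2)]
write_obj("triangle.obj", verts, tris)
```

Blender's own OBJ/glTF exporters do the real work in practice; this only shows why the hand-off to a web renderer is so simple.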
When a Mac is connected to a high-resolution 2K monitor, the text is actually a bit small; the DELL U2518D on my desk, for example, squeezes a default resolution of 2560×1440 into a 25-inch screen.
This project, a robotic arm with multiple screens (by ManaVR ✖ INT++), was made in 2017.
In the early stage, I used MaxMSP Jitter together with ABB's RobotStudio to simulate the robotic arm and the large screens.
This article focuses only on how to use MaxMSP to simulate the project prototype, making full use of MaxMSP Jitter's very convenient modules for TCP communication, multi-screen motion simulation, and more.
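The TCP side of such a prototype is simple on the non-Max end too. A hedged sketch, with the Max patch stood in for by a plain Python echo server on localhost (the port, the message format, and all names here are my own illustration, not the project's actual protocol):

```python
import socket
import threading

def echo_server(sock):
    """Accept one connection and echo back the newline-terminated message."""
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# Stand-in for the Max patch: a local TCP listener (port 0 = any free port)
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

# The controller side: send a hypothetical joint-angles message for the arm
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"angles 10 20 30\n")
reply = cli.recv(1024)
cli.close()
```

In the real setup the listener would be a Jitter TCP object and the payload whatever RobotStudio expects; the point is only that plain newline-delimited TCP messages are enough to drive the simulation.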
In short, you know, in my hands MaxMSP is not just MaxMSP 🙃.
The key steps of the video above:
Mainly about three topics:
These days I’m learning machine learning and trying to generate visuals based on StyleGAN, a neural network.
AILog005 was first sent to my wife, and she said: “OK, more suppressed.”
Great! That is exactly the feeling I was looking for. Not only do I specifically mean “suppression”, but I have finally found a way of expressing myself, and it is “obscure”.
Blender is now a new force in 3D art. New, but not young: it is about twenty years old.
The first article of 2020, and I accidentally picked an Old School topic.
There was a scene in 名探偵コナン 戦慄の楽譜フルスコア (Detective Conan: Full Score of Fear), released ten years ago. Conan, standing in the middle of the water, first knocked the telephone receiver off the hook on the shore with a spectacular long shot, then closed his eyes and sang out loudly, remotely dialing the 110 emergency number.
This time I will talk about how to use sound to make a phone call.
And some advanced content: using sound waves as a carrier for interaction, and decoding DTMF signals.
Conan not only demonstrated the final effect…
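DTMF itself is easy to sketch: each key is the sum of one low-frequency and one high-frequency sine (e.g. “1” = 697 Hz + 1209 Hz). A minimal NumPy generator and decoder, assuming we simply check which two frequencies dominate the spectrum (real decoders typically use the Goertzel algorithm instead; the table below lists only a few keys):

```python
import numpy as np

# Standard DTMF frequency pairs (low, high) in Hz, a subset of the keypad
DTMF = {"1": (697, 1209), "2": (697, 1336), "5": (770, 1336),
        "0": (941, 1336), "#": (941, 1477)}

def tone(key, sr=8000, dur=0.2):
    """Return the DTMF tone for a key: the sum of its two sine waves."""
    lo, hi = DTMF[key]
    t = np.arange(int(sr * dur)) / sr
    return np.sin(2 * np.pi * lo * t) + np.sin(2 * np.pi * hi * t)

def decode(sig, sr=8000):
    """Find the two strongest spectral peaks and match them to a key."""
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), 1 / sr)
    lo, hi = sorted(freqs[np.argsort(spec)[-2:]])
    for key, (l, h) in DTMF.items():
        if abs(lo - l) < 20 and abs(hi - h) < 20:
            return key
    return None

digit = decode(tone("5"))
```

Play such tones into a landline's mouthpiece and the exchange dials the number, which is exactly the trick in the movie; singing the right pitches by voice, as Conan does, is rather harder.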