Mm is excited to be moving towers soon, to a new shiny office inside a new shiny tower… and in the new tower, for reasons too pedestrian to recount here, the boring standard office ceiling tiles are being replaced with clear acrylic ones. This, of course, is awesome. But in what particular way?
Over a few cocktails and glasses of champagne at Guildford’s entirely un-legendary and slightly creepy The-Shining-esque hotel-bar-blue-light-travelling-salesman-horror-venue ‘The Mandolay’, the Molecules put their heads together. ‘What,’ slurred Paul ‘Aggie’ Davis, ‘about adding coloured lights above the tiles?’
What indeed! I feel a lab project coming on…
A little drunken 3 am googling session later, whilst contemplating how one might wire up and control 2000 ceiling tiles with coloured lights, I discovered this rather wonderful chip: the Allegro A6281.
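A little context for the hardware-curious: the A6281 is a serially-chainable LED driver that takes a 32-bit frame holding three 10-bit PWM values. The exact bit layout below is an assumption based on how ShiftBrite-style boards drive the chip — check the datasheet before wiring up 2000 tiles:

```python
def a6281_frame(red, green, blue, command=0):
    """Pack a 32-bit A6281 frame: 2 command bits on top, then three
    10-bit PWM channels (blue, red, green, MSB first). The bit order
    is an assumption worth verifying against the datasheet."""
    for c in (red, green, blue):
        if not 0 <= c <= 1023:
            raise ValueError("each channel is 10 bits (0-1023)")
    return ((command & 0x3) << 30
            | (blue & 0x3FF) << 20
            | (red & 0x3FF) << 10
            | (green & 0x3FF))
```

Frames for a whole chain of chips would then just be shifted out one after another.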
Are all my dreams answered?
Alex Evans thinks about lots of awesome techie things, always reading, always learning, always geeking out and making weird and awesome things.
The other day he sent me the text that I now present to you, a collection of thoughts on recent developments in computer graphical hoo-hah. I don’t really understand half of it, being just a lowly writer, but he was insistent that we publish it at once! So here it is, Alex’s mini tech blog, enjoy!
The Arnold renderer is back! The renderer by Marcos Fajardo that made those super cute, super soft renders back in the 90s and popularised ‘ambient occlusion’ – look here for a blast from the past! It seems that Sony Pictures Imageworks used a new version of Arnold on Monster House and also on Cloudy With a Chance of Meatballs. For graphics geeks (like Alex), that’s pretty exciting.
After the keynotes at this year’s HPG (High Performance Graphics) conference, everyone was talking about how ray tracing was the ‘new’ thing. At last! It’s only taken 30 years… And the first beta images of the new Arnold renderer look great.
Meanwhile, back in games land – ever since Jon Olick from id (ex-Naughty Dog, I believe) gave a talk at Siggraph 2008 about his research on voxels (remember Comanche? Ah, those were the days!) – anyway, yes, voxels on the GPU seem to be the in thing. This video is quite cool.
More recently, Gigavoxels by Cyril Crassin is a beautiful implementation of a simple idea – volume mip maps stored in a sparse octree – that allows blurring and filtering of voxel scenes. That’s crucial – I hate the blocky look of most voxel renders, which is the equivalent of point sampling of textures – and not intrinsic to voxels at all. Given that my personal graphics manifesto is ‘blur and add noise’ (it solves all problems!) gigavoxels is right up my street. Awesome stuff.
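To make the ‘volume mip map’ idea concrete, here’s what building one mip level looks like on a dense grid – GigaVoxels stores these in a sparse octree of bricks rather than a flat array, so treat this as an illustrative sketch of the filtering idea only:

```python
def downsample_volume(vox, n):
    """One mip level: average each 2x2x2 block of an n^3 density grid.
    vox is a flat list indexed as x + n*y + n*n*z; returns the
    (n/2)^3 grid. Sampling these coarser levels is what lets a voxel
    renderer blur/filter instead of looking blocky."""
    m = n // 2
    out = [0.0] * (m * m * m)
    for z in range(m):
        for y in range(m):
            for x in range(m):
                s = 0.0
                for dz in (0, 1):
                    for dy in (0, 1):
                        for dx in (0, 1):
                            s += vox[(2*x+dx) + n*(2*y+dy) + n*n*(2*z+dz)]
                out[x + m*y + m*m*z] = s / 8.0
    return out
```

Exactly the 3D analogue of texture mipmapping – which is why the point-sampled blocky look is a choice, not something intrinsic to voxels.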
GPU ray marching
Demo land: Iñigo Quílez has been hammering on signed distance fields and GPU ray marching – and has got it down to a fine art. His 4k demos and images (that’s 4k! Tiny! Amazing! Etc!) are second to none. The technique is super simple - but in his artistic hands he creates beautiful images. (shameless plug alert).
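For the uninitiated, the ray-marching technique (often called ‘sphere tracing’) really is that simple – a sketch, with a hypothetical single-sphere SDF standing in for a whole scene:

```python
import math

def sphere_sdf(p, centre=(0.0, 0.0, 5.0), radius=1.0):
    """Toy signed distance function: one sphere stands in for the scene."""
    return math.dist(p, centre) - radius

def ray_march(origin, direction, sdf, max_steps=128, eps=1e-4, max_dist=100.0):
    """Sphere tracing: the SDF value at a point is a guaranteed-empty
    distance, so step along the ray by exactly that much; stop when we
    get close enough to a surface (hit) or wander off (miss)."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(origin[i] + t * direction[i] for i in range(3))
        d = sdf(p)
        if d < eps:
            return t          # hit: distance along the ray
        t += d
        if t > max_dist:
            break
    return None               # miss
```

Replace `sphere_sdf` with any min/max combination of distance functions and the same loop renders arbitrarily complex scenes – which is how so much fits in 4k.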
I’ve always loved signed distance fields – they mesh neatly with voxels, ray tracing, and blurring-and-adding-noise (see what I did there? :)) – and I gave a talk about them at Siggraph 2006 – “Fast Approximations for Global Illumination on Dynamic Scenes” – which includes a rough, unrefined version of the way that Iñigo creates the soft shadows in his images.
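The soft-shadow trick, in rough outline (this is a hedged sketch of the general idea, not the exact method from the talk): while marching the shadow ray towards the light, the SDF tells you how closely the ray grazes occluding geometry, and that near-miss distance gives you a penumbra almost for free:

```python
import math

def sphere_sdf(p, centre=(0.0, 0.0, 5.0), radius=1.0):
    """Toy signed distance function: one sphere stands in for the scene."""
    return math.dist(p, centre) - radius

def soft_shadow(point, light_dir, sdf, k=8.0, t_min=0.02, t_max=10.0):
    """March from the surface point towards the light, tracking the
    smallest (SDF / distance travelled) ratio seen: a close graze early
    on means a dark, sharp-edged shadow, a distant graze a soft one.
    k controls penumbra sharpness. Returns 0 (occluded) to 1 (lit)."""
    shade = 1.0
    t = t_min
    while t < t_max:
        p = tuple(point[i] + t * light_dir[i] for i in range(3))
        d = sdf(p)
        if d < 1e-4:
            return 0.0        # ray actually hits geometry
        shade = min(shade, k * d / t)
        t += d
    return shade
```

One extra `min` per march step, and you get area-light-ish shadows without sampling an area light at all.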
That paper also constituted the first time Sackboy was ever seen outside the walls of Mm – and an explanation of how I handle all those little lights in levels - long before LBP was announced! However I put a sphere over his head so that nobody would recognise him. It seems to have worked…
Still with me? Anyone understand that? Good, then you’ll be looking forward to the second part then won’t you? An update from Mister Johnny Hopper about Siggraph 09 - coming soon!
3d scanning update! After a few more experiments, I uploaded some very ropey (and non-interactive) 3d scanner code to the david-laserscanner forum, along with some mugshots of Mm’s Dave Smith. And, bless him, a chap called Florian from The Internet posted a comment to my previous post here, with a running version of my code – complete with an interactively rotatable 3d scan of Dave! OMG! I hope he’ll forgive me for reposting the link he put in the comment: you can get the file at http://bezier.de/exchange/alex_3d_scanner_x02.zip and, when you run it, you should see a 3d mugshot of Mm’s own Dave that’s rotatable with the mouse. If I had a little more skill with Processing, I probably could have embedded the java applet version in this page, but, er, I don’t know how at this late point in the day :)
But how do you run it then, I hear you ask? It’s a port of my C++ code to Processing, and Processing runs on any platform with Java: Linux, Mac, PC, etc. So get thee forthwith to http://www.processing.org and download the right version for you. Then, once you’ve sussed out the Processing interface, load up Florian’s ‘.pde’ file, click ‘play’, and behold the awesome 3d scan-ness. All from 3 simple black and white photos… hurrah.
I’ve always had a passing interest in things like computer vision, 3d scanning and wot-not… but I don’t really know much about it. So one Sunday a few weeks ago I spent a nice day sitting on the sofa, reading papers on 3d scanning. (I’m sad like that.) I was looking for a ‘free-time project’ which I could play with in half-hour chunks – as a coder with lots to do, I don’t have the luxury of the immediacy of a sketchbook, nor do I have vast tracts of time to sit down and get stuck into a coding session on anything other than LBP. And I have never got into Processing, for some reason…
Anyway, I stumbled upon a rather cool free laser scanning program, which lets you scan objects (that is, create 3d meshes in the computer that accurately represent a real object, for example, your face) using just a laser pointer, a computer and a webcam. It’s called the ‘david laserscanner’. Not possessing a laser pointer, I nosed around a bit more and decided that I might be able to program a structured light scanner. In this technique, you shine a pattern (or patterns) onto your object using a video projector, and photograph the results. By the same sort of triangulation that stereo-image-pair type techniques work by (you know, red-green glasses and wot-not), you get a mesh out. Only it’s more robust, because one of the cameras is replaced by a projector, and the pattern you project allows you to more clearly see the shape of the object. Like this:
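To make the structured-light idea concrete, here’s the sort of pattern you project: sinusoidal vertical stripes, shown three times with the phase shifted by a third of a period each time, so that every camera pixel can later be matched back to a projector column. (Names and parameters here are illustrative, not from my actual code.)

```python
import math

def stripe_intensity(x, period=32.0, phase_shift=0.0):
    """Brightness (0..1) of a vertical sinusoidal stripe pattern at
    projector column x. Projecting three of these, each shifted by
    120 degrees, encodes the projector column at every pixel."""
    return 0.5 + 0.5 * math.cos(2.0 * math.pi * x / period + phase_shift)

# the three patterns to project, one after the other:
patterns = [lambda x, s=shift: stripe_intensity(x, phase_shift=s)
            for shift in (0.0, 2.0 * math.pi / 3.0, 4.0 * math.pi / 3.0)]
```

Where the stripes land on the face they bend, and how much they bend at each pixel is exactly the depth information the triangulation step recovers.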
A week later I had a plan – see the david forum for the details!
Another week passed before I managed to snaffle Rex’s HD video camera, and took those pictures of our very own Jonny Hopper (’J-Ho’). You can see that all that’s needed is a program that can measure the ‘bendyness’ of the stripes on his face.
This weekend I spent Saturday afternoon trying out different ideas for this. I managed to compensate for the gamma curve of the projector/camera, which was step 1; then I was able to extract the ‘phase’ of the stripes, which was step 2. But then I hit a snag – my ‘phase unwrapping’, which gives you your output depths, doesn’t work too well. Never mind – I was trying a pretty dumb algorithm, and I’ve run out of time, so it’ll have to be next week or so before I get a chance to try again. Still, massive noise aside (all of which comes from bugs in my unwrapper), the initial results are promising! The huge errors visible on the left are just due to errors in the phase-unwrap, and hopefully they can be got rid of completely. You can see quite a nice profile, where it works! Yay!
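For anyone wanting to play along, step 2 (phase extraction) and the troublesome unwrapping step look roughly like this in miniature – the standard 3-step phase-shifting formula, plus the kind of naive scanline unwrapper that causes exactly the grief described above (a sketch, not my actual code):

```python
import math

def wrapped_phase(i1, i2, i3):
    """Recover stripe phase at one pixel from three intensity samples
    taken under patterns phase-shifted by -120, 0 and +120 degrees
    respectively (standard 3-step phase-shifting formula). The result
    is wrapped into (-pi, pi], hence the need for unwrapping."""
    return math.atan2(math.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

def unwrap_row(phases):
    """Naive 1D phase unwrapping along a scanline: whenever the phase
    jumps by more than pi between neighbouring pixels, assume we
    crossed a stripe boundary and carry a +/- 2*pi offset. Any noisy
    pixel propagates its error down the whole row - which is why dumb
    unwrappers produce those huge streaky artefacts."""
    out = [phases[0]]
    offset = 0.0
    for prev, cur in zip(phases, phases[1:]):
        d = cur - prev
        if d > math.pi:
            offset -= 2.0 * math.pi
        elif d < -math.pi:
            offset += 2.0 * math.pi
        out.append(cur + offset)
    return out
```

The unwrapped phase at each pixel is proportional to the projector column it saw, and from there it’s plain triangulation to a depth.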