Moved to v5.kazzuya.com

@%#^ China !!!

I'm hoping to move to Blogger and Duddie tells me that he can't see my v5.kazzuya.com if it's hosted on a Blogger server.
Today I found out that actually "all things blog" are banned in China. "Blog" is a bad word in China.. and I thought blog was just a stupid word (people speaking out: big deal.. how hard is it to speak out ?? (unless you live in China or something)).

I don't mean to be racist, but the Chinese government really sucks. Censorship is generally bad, and at this level it is just evil.

Some say that China will take over the world.. I have my doubts on that. China as it is can't possibly gain more influence in the world without letting its own people access the world free of imposed limits on knowledge.

If Duddie or any other person living in China wants to access the new version of my site, he/she can either ask some kid to set up a proxy to get around the Great Naziwall of China, or I can try to find a way around it.. but, really, just ignore and boycott China. Like I would do (and to some extent, do) with Italy.

ole'

V5 getting closer

The design is not there yet, but it works (nothing strange about it)


v5.kazzuya.com

V5 ?

I've been thinking about changing this site once again.
The main reason is that I've noticed some things not working quite as well anymore. Mostly I'm having problems with the site not remembering my login (cookies, I guess ?). Also the "Recent comments" area isn't up to date for non-registered users.
I suspect that this depends on cookies and also on the Drupal cache (which I have disabled anyway).

I'm thinking about moving back to Blogger, which in the meantime has matured. It's easy now to redirect a sub-domain to Blogger (for example v5.kazzuya.com..) and user comments are finally embedded in the blog entry's page (finally !!!).. also users can comment with their own Google accounts.. myself included of course.

The main problem would be to move the blog entries from Drupal to Blogger.. but I gave up on that already. I'd simply freeze v4.kazzuya.com and put a big link on top to redirect to the newest blog instead.. like I did for v3.kazzuya.com

I'm not sure what to do with kazzuya.com though. Right now it automatically redirects to the latest blog, but perhaps I should instead convert it to a master page with a big link to the latest blog and a list of previous blogs..

blog blog blog !!!

Into Siggraph 2008

I arrived Sunday.. very tired. Partly because I was preparing some stuff for some meetings and I worked non-stop for a day and a half and then some more (it wasn't really worth going on such a stretch).

For this trip I thought about getting a new laptop. I wanted Vista to run some DX 10 stuff.
I went to a shop in Tokyo at the last minute but eventually gave up and installed Vista on my Vaio SZ70. It runs fine.. but it's the little things with the drivers that upset me (now almost everything works, but the camera doesn't, plus some other quirks).

To get the proper Vista upgrade I should call Sony in Japan.. ehhh !
The SZ has an NVidia card, but I realized that it doesn't really support DX 10, so I plan on reinstalling XP and selling it.
It's had its day.. but it's still very much a great laptop (dual core, 2GB RAM, relatively light, etc etc).

I'm planning on getting a Vaio Z.. it's lighter than the one I have and has a better graphics card. Actually to get the 256MB VRAM version one has to make a custom order. Same if one wants a 7200RPM disk instead of the standard 5400RPM.

..so many issues, so much time wasted chasing those things, when will I learn ? 8)

I don't think I'll be buying a Mac laptop though. Apple offers little choice and recently I've grown tired of it. The MacBook Air didn't really fit my needs, plus there is this constant "wait for the next model coming out maybe next month" kind of feeling.
I also use Windows for development and I find that OS X's interface is somewhat outdated.

One thing I can't stand is the global menu bar on top. It's terrible to have to look up to pick something from the menu. It gets silly if your monitor is a 24-inch like my iMac's. With a dual monitor setup, the global menu bar thing really becomes stupid and the usability level drops into the realm of serious frustration.

Vaio is a nightmare if you try to install the OS and the drivers by yourself, but I think I can live with it.. English Vista would be great, but I can survive with the Japanese version.

..about Siggraph.. I've been so tired and sleepy. I thought I had gotten rid of the jet-lag but it's actually all screwed up because of many overlapping things: I normally sleep at weird hours, but here I also have to wake up early.. and I really don't know exactly what's going on.

It's my second Siggraph here in Los Angeles (the first being in 1997). Compared to last year in San Diego, the convention center here is kind of dark.
I think I see a lot fewer people. Downtown Los Angeles is also basically bullshit. It's got to be the most fake downtown in all of America. There are no residences; there are some skyscrapers, but those are bank buildings. The rest is the usual mom-and-pop shops that look like the electrified version of Far West shops.

Without a car, the only things I can do are take the shuttle from the hotel and take the shuttle back to the hotel.

I've never been to New York or Chicago. I've been to San Francisco.. some say that you can get around with public transportation in SF but that wasn't my feeling.

You can't really live in a big city without pervasive public transportation or a car.. Los Angeles (which is an area rather than a city) was made for cars.. by cars (?)

Another thing that hits me, as usual, is the public restrooms. I've tried to stay away from those at the convention center.. but it's impossible not to smell feces at peak times.
I find it pretty hard to smell other people's excrement and to listen to everything that happens in the adjacent "stall".. complete with a full picture of the other person's shoes and lowered pants.

I guess it's all about culture, standards (not meant in a denigrating way).
If I came from the jungle, I'd probably think that public toilets in the USA were luxurious.. but coming from Japan, I feel like I'm taking a shit in the jungle 8)
Of course this is just me, I've always been sensitive about restrooms..
Also there are pretty bad toilets in Japan too... it's never a matter of white or black, but more of the average white and black cases.

Overall this Siggraph is a bit depressing (they even skipped the Electronic Theater). A lot of talks seem boring and a lot of stuff is more rehashed than usual..

ummmmmmmmmm

The fundamental problem with Visual Studio (C++) 2008

The find dialog is dog slow !!!
After some usage, "Find" or "Find In Files" becomes very slow. I tried leaving the find box docked in the IDE window, but every time I type CTRL+F the dialog box goes into some sort of refresh seizure and only comes back after 3-4 seconds (on a 4-core machine with 3GB of RAM 8).. an eternity if you are trying to jump somewhere in the code.
I noticed the same problem on a colleague's machine. I tried installing the SP1 beta for VS2008, which supposedly fixes some slowness in the GUI, but it didn't fix that one for sure.
An alternative is using CTRL+D (which I never really used before) to go to a quick find box; however, that doesn't allow changing parameters such as case sensitivity and whole-word matching, nor verifying what the current settings are.

Visual Studio has really become a piece of crap. It's still my favorite IDE ...if coupled with Visual Assist X. But if you program in C/C++, then you are a second-class citizen, as Visual Studio has really sold its soul to web/.NET/database development, or whatever else most bored programmers out there are doing.

One example is the dialog for changing syntax coloring. There is no "C++ comment" entry anymore. That is hidden behind "XML Comment" or something along those lines !

Another very sad thing, which has been worsening over the years since Visual C++ 6, is the context help.
Theoretically one could press F1 on a keyword or API call and get the context help for it. In practice one presses F1, waits several seconds for some large application to load (or even to access the net !) and then gets a completely unrelated doc.. if lucky (one could set some sort of filters, but those never work right somehow).
For Windows API calls I used to get the Windows CE docs a few versions ago. Now I just get garbage.
It works much better to go to the MSDN web site and do a web search there.
How can Microsoft possibly win the web search engine battle if it can't even put together a system to help find a few thousand API calls ?

One last rant, if I'm allowed (I think I am !): headers have become completely unreadable. The Microsoft includes and the STL stuff are so complicated that they give practically no help.
They must be machine generated.. it would be nice to see the source of those.
MS includes have all sorts of odd decorations to specify whether parameters are input or output or what kind of passing they use.. turning those headers into something that could win an Obfuscated C Code Contest (for ugliness).
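To give an idea of what I mean, compare a plain declaration of a Win32 call with the decorated one from the SDK headers (the annotated version is quoted from memory, so take the exact macro names with a grain of salt):

    // How one might plainly declare a Win32 call:
    BOOL GetComputerNameA(LPSTR lpBuffer, LPDWORD nSize);

    // ...and roughly how it appears in the actual SDK headers once the SAL
    // decorations are applied (approximate; spelling varies between SDK versions):
    __checkReturn
    BOOL
    WINAPI
    GetComputerNameA(
        __out_ecount_part_opt(*nSize, *nSize + 1) LPSTR lpBuffer,
        __inout LPDWORD nSize
        );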

Thaaaaaaaannnkksss !!!

Getting ready for Siggraph 2008

I'll be in Los Angeles in two weeks for Siggraph 2008.
Because of that, I've been coding a bit less and spending more time with Excel and Power Point, preparing for some meetings that I'll have there..
Using MS Office depresses me. I used to pride myself on not understanding what possible use I could have for Office.
Now, I think it's important to make Power Point presentations for all those people that aren't talking with you daily and that can't or won't understand code. One must learn to communicate and perhaps to inject some hype, too (MegaTexture, Geospatial, real-time Ray Tracing ;) ..but when time is scarce, it's depressing to think that I could be writing code and solving interesting problems rather than picking a nice font or resizing a picture in a corner of a presentation slide.
This reminds me of a Dilbert strip that to me is now more real than ever: at an external site there is Wally's head poking out through a hole in a table, inside a glass bell. Wally's face looks tired and unshaven. A spectator is looking at it while Dilbert explains: "this is what our 3D product would look like if we didn't have to waste time preparing demos". ...my feeling exactly 8)

Anyhow, I did write some code too. For one thing I'm at a good point with this "geometry processing framework". Basically a 3D engine tailored for geometry manipulation rather than rendering.
I once wrote something that was mixing both geometry processing and rendering, but it was a bit of a pain to maintain.
Personally, I think that it's more difficult and more important to put down those kinds of frameworks than to get Direct3D/OpenGL to spin an object. Making API calls to a spoon-fed rendering interface is not the same as manipulating, optimizing and organizing data.

On the texture side, I've started writing some Power Point (indeed) and the goal there is to push the system rather than the format. The idea is that a JPEG-like format is being built, but it's important to see how the eventual engine behind it will handle it best, unpacking at selected texture resolutions on the fly depending on the VRAM and bandwidth budget.

Very important is also the build process. I've been asking artists not to optimize textures. Current assets are good enough to make high quality pre-rendered movies.. one character uses about 1GB of texture memory. However, many times these textures include an alpha channel that is not used and that can safely be removed. Bump maps, which only need one value, can also be converted from RGB to grayscale (this can't be done for color textures, because in DX10 there are red-channel-only textures instead of luminosity textures (bha !)).
Anyway, the idea really is to make things as scalable as possible.. this is exciting because it gives authoring freedom.
One could ask an artist to make a model at a specific resolution, or even to make one high-res version and then manually convert it to lower res with normal maps.. but that's all time wasted doing manual labor. Manual work is what should be avoided.. artists shouldn't worry about simplifying geometry unless they want to.
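For instance, the unused-alpha check mentioned above could be a trivial pass in the build tools.. a minimal sketch, assuming plain 8-bit RGBA data (the function is hypothetical):

    #include <cstddef>
    #include <vector>

    // If the alpha channel never varies it carries no information, so the
    // build step can safely repack the texture as RGB.
    bool AlphaIsConstant(const std::vector<unsigned char> &rgba) // 4 bytes/texel
    {
        if (rgba.size() < 4)
            return true;
        const unsigned char first = rgba[3];
        for (std::size_t i = 7; i < rgba.size(); i += 4)
            if (rgba[i] != first)
                return false;
        return true;
    }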

Time to sleep.. the Melatonin is finally taking effect 8)
zzzzzzzzzzzzzzzzzzzzzzzz

Doing the MegaTexture thing

I've finally setup a nice decompression thread that takes care of everything (almost) without any hiccups from the rendering side.

Right now I'm only decompressing textures in full size.
When a texture is loaded, the average color (or 1x1 mip-map) is available right away (the 1x1 mip is stored in the file header without any need for decompression). Then a task for decompressing the full-size texture is created and the decompression starts right away.
Right now it's either 1x1 or full-size; next I'll have to pick actual intermediate resolutions depending on the image being rendered.

One nice thing that I found out is that DX10 is thread-safe by default (no need to set special flags that would make one fear degraded performance). So the actual texture creation and decompression all happens in a thread separate from the main loop.
As I mentioned before, the texture decompressor unpacks data in slices using OpenMP. This allows me to use whatever cores I have to decompress one texture at a time as fast as possible (more cache coherent than trying to decompress different textures at once).
Currently, during load I can see all 4 cores being used to the maximum (more or less).. it's a nice feeling 8)
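To make the D3D10 part concrete, this is more or less the kind of call that can now live on the decompression thread.. a minimal sketch, assuming the device was created without the D3D10_CREATE_DEVICE_SINGLETHREADED flag (i.e. the thread-safe default) and that the decompressed data is plain RGBA8:

    #include <d3d10.h>

    // Create an immutable RGBA8 texture from freshly decompressed pixels.
    // Safe to call from a worker thread thanks to D3D10's default thread safety.
    ID3D10Texture2D *CreateTextureOnWorkerThread(
        ID3D10Device *device, const void *pixels, UINT width, UINT height)
    {
        D3D10_TEXTURE2D_DESC desc = {0};
        desc.Width            = width;
        desc.Height           = height;
        desc.MipLevels        = 1;
        desc.ArraySize        = 1;
        desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
        desc.SampleDesc.Count = 1;
        desc.Usage            = D3D10_USAGE_IMMUTABLE;
        desc.BindFlags        = D3D10_BIND_SHADER_RESOURCE;

        D3D10_SUBRESOURCE_DATA init = {0};
        init.pSysMem     = pixels;
        init.SysMemPitch = width * 4; // tightly packed RGBA8 rows

        ID3D10Texture2D *tex = 0;
        if (FAILED(device->CreateTexture2D(&desc, &init, &tex)))
            return 0;
        return tex;
    }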

To minimize initialization times, I've also implemented a simple Direct3D texture object cache.
Every time a texture is released by the engine, it doesn't actually get released in Direct3D immediately; rather, it's added to a free list (unlike the Wikipedia article, I'm not actually using linked lists) where it can be picked up again if there is another request for a texture with the same characteristics.. otherwise it gets released from D3D after a few frames of being unused.
Cache aside, I'm mostly counting on the multi-threading to cover for all those potential stalls incurred by resource management in D3D.
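A rough sketch of the free-list idea, just to make it concrete (not my actual code; the texture characteristics are reduced here to size and format):

    #include <d3d10.h>
    #include <map>

    // The "characteristics" a texture request is matched on.
    struct TexKey
    {
        UINT        width, height;
        DXGI_FORMAT format;

        bool operator<(const TexKey &o) const
        {
            if (width  != o.width)  return width  < o.width;
            if (height != o.height) return height < o.height;
            return format < o.format;
        }
    };

    struct PooledTex
    {
        ID3D10Texture2D *tex;
        int              framesUnused;
    };

    typedef std::multimap<TexKey, PooledTex> TexPool;
    static TexPool sTexPool;

    // On release, the engine parks the texture here instead of Release()-ing it.
    void ParkTexture(const TexKey &key, ID3D10Texture2D *tex)
    {
        PooledTex entry = { tex, 0 };
        sTexPool.insert(std::make_pair(key, entry));
    }

    // On request, try the pool first; returns 0 if nothing matches.
    ID3D10Texture2D *AcquireFromPool(const TexKey &key)
    {
        TexPool::iterator it = sTexPool.find(key);
        if (it == sTexPool.end())
            return 0;
        ID3D10Texture2D *tex = it->second.tex;
        sTexPool.erase(it);
        return tex;
    }

    // Called once per frame: entries unused for too long finally go back to D3D.
    void SweepPool(int maxUnusedFrames)
    {
        for (TexPool::iterator it = sTexPool.begin(); it != sTexPool.end(); )
        {
            if (++it->second.framesUnused > maxUnusedFrames)
            {
                it->second.tex->Release();
                sTexPool.erase(it++); // erase while keeping the iterator valid
            }
            else
                ++it;
        }
    }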

The next big step will be to continuously "resize" textures depending on the needs of the frame being rendered.
This should happen lazily to avoid too much work by the decompression system.
The reason why one would want to resize textures is to economize on memory, but that doesn't have to happen at every frame for every texture. It makes more sense to keep a texture at 1024x1024 even if the next frame only needs a 512x512, rather than putting the decompression to work. The 1024x1024 may be needed again soon, and the rasterizer is going to pick the right mips regardless of the maximum resolution.
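The policy itself can be something as dumb as this (a sketch with made-up thresholds, nothing final):

    // Pick the resolution to keep a texture at. Grow right away (quality
    // first), but only shrink after it has been oversized for a while, so
    // the decompression system isn't chasing every frame's exact needs.
    int ChooseResidentSize(int residentSize, int neededSize, int framesOversized)
    {
        if (neededSize > residentSize)
            return neededSize;                 // grow immediately
        if (residentSize >= neededSize * 4 &&  // e.g. 1024 resident, 256 needed
            framesOversized > 60)              // ..and oversized for ~1 second
            return residentSize / 2;           // shrink lazily, one step at a time
        return residentSize;                   // otherwise leave it alone
    }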

In general I like the idea of progressive quality. I like the idea of drawing geometry with whatever texture resolution I have. This way I can theoretically keep a consistent frame rate and the quality of the textures will change depending on how fast the decompression can happen (which depends on the CPU/GPGPU power).

Next next I'll have to worry about geometry.. that's trickier especially when using a lot of complex materials.. but hopefully some coworkers can help there ;)

wooooo
zzzzzzzzzzz

Productivity, generality and OpenMP

Flash news.. I'm very busy at work 8)

I could work less, but I want to produce something good. I like the idea of taking a more general approach to problems and making something bigger out of it.

One of my current goals is to develop in a scalable manner. In order to do that, things need to be rethought in a more generic form.
For example, one can keep a plain triangular mesh, or develop a remeshing system that turns the original geometry into a semi-regular data structure that can easily be compressed and streamed progressively.

I think that scalability is really a key to the much needed productivity improvement in game development.

At work we talk every day about how to go about some solution, and the key question there is always: "can we use a scalable and generic solution ?".
This is usually about the development pipeline.. not about actual code. The idea of code reusability is less straightforward. I actually aim more at providing simple implementations and at modularizing code so that it can easily be grabbed without too many dependencies.. rather than trying to fit everything into some supposed grand scheme of object hierarchies and whatnot.

In the end the harder problems are really those about how to organize data and how to transform that data across the development pipeline.

On the side, I also used OpenMP for the first time. After a few odd results, I managed to parallelize a loop that uncompresses images in that progressive-JPEG-like format that I've been working on.
As with JPEG, the image is processed in 8x8 sub-blocks. Using OpenMP pragmas, I set the parallel section to work on rows of blocks.
Parallelizing every row of blocks makes sense, but I could probably try to do multiple rows at once to see if I can reduce the overhead of context switching and potential cache thrashing. Parallelizing every single block instead turned out to be overkill.
As a rule of thumb, if I think that I could wrap some code into a function with practically no overhead, then perhaps I can make a parallel section out of it. In fact, I think that OpenMP eventually grabs that section and makes a function out of it anyway...

Aside from some early decoding artifacts due to my inability to share some variables from outside the "parallel for" (see the sketch below), using OpenMP was really easy. It's definitely much simpler than manually creating and reusing threads, and also less involved than using Threading Building Blocks, because one doesn't need to create functor objects and because OpenMP is readily available in modern compilers with minimal effort.
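This is roughly the shape of the loop now (a trimmed-down sketch with hypothetical names; the point is that one iteration covers a whole row of 8x8 blocks, and each row writes only its own slice of the output):

    #include <omp.h>

    struct Image
    {
        unsigned char *pixels;    // decoded output
        int            width;     // in pixels, a multiple of 8
        int            blockRows; // image height / 8
    };

    // Hypothetical: decodes one row of 8x8 blocks from the packed stream,
    // writing only that row's slice of img.pixels.
    void DecodeBlockRow(Image &img, const unsigned char *packed, int row);

    void DecodeImage(Image &img, const unsigned char *packed)
    {
        // img and packed live outside the loop, so OpenMP shares them by
        // default; only the loop index is private. Getting this sharing
        // right was exactly what fixed my early decoding artifacts.
        #pragma omp parallel for
        for (int row = 0; row < img.blockRows; ++row)
            DecodeBlockRow(img, packed, row);
    }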

cool
zzzzzzzzzz

About Id's Megatextures and modularity

I found a paper on Intel's site. It's from someone at Id Software.
I think it's pretty close to what's behind the cool MegaTexture name.

..basically a very optimized progressive-JPEG-like streaming and decompression in real-time (edit: actually it streams, but it's not progressive in the "progressive JPEG" sense). Something very close to what I've been doing recently (as a task in a project, not as full-blown research).

For my implementation I decided to put in some extra effort and make it easy to use outside the main application. This means that I'm going to hide all my "nice" support classes and types and only expose the bare minimum for anyone to use the proposed functionality.

I set the goal of making a DLL out of this progressive CODEC, with more DLLs to come in the future.
The reason for a DLL rather than a LIB is that static linking can be pretty tricky. For example, I have a global new and delete overload because I normally need 16/64-byte aligned memory.
It's nice to be able to use a lot of my common library, but it wouldn't be nice to force headers and symbols onto other potential users.
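For reference, the kind of global overload I mean is essentially this (a minimal sketch using the MSVC CRT's _aligned_malloc, with the array and nothrow variants left out):

    #include <malloc.h>
    #include <new>

    // Route every allocation through 16-byte aligned memory (MSVC CRT).
    // The DLL keeps this internal, so users aren't forced to do the same.
    void *operator new(std::size_t size)
    {
        void *p = _aligned_malloc(size ? size : 1, 16);
        if (!p)
            throw std::bad_alloc();
        return p;
    }

    void operator delete(void *p)
    {
        if (p)
            _aligned_free(p);
    }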

Thinking in a DLL way also makes it easy to write at least one clean class that is easy to understand and document. The class exported in the DLL header is a bare-bones interface with a pointer to an actual hidden implementation class (aka PIMPL).
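In practice the exported header ends up looking something like this (a hypothetical sketch, not the real interface):

    // codec.h -- the only header a user of the DLL ever sees
    #include <cstddef>

    #ifdef CODEC_EXPORTS
        #define CODEC_API __declspec(dllexport)
    #else
        #define CODEC_API __declspec(dllimport)
    #endif

    class CodecImpl; // hidden implementation, defined only inside the DLL

    class CODEC_API ProgressiveCodec
    {
    public:
        ProgressiveCodec();
        ~ProgressiveCodec();

        // Decompress the packed stream up to the requested mip level.
        bool DecodeLevel(const void *packed, std::size_t packedSize, int mipLevel);

    private:
        CodecImpl *mImpl; // PIMPL: all the "nice" support classes hide behind this
    };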

...basically I'm talking about modularity ! ..back from the dead.. saving the day where OOP abuse complicates APIs.

cool
