
Babylon 5 and HD...

If there's one thing I've learned in this thread it's that while B5 might not have the MOST fans, it certainly has the smartest.
 
If there's one thing I've learned in this thread it's that while B5 might not have the MOST fans, it certainly has the smartest.
:eek: :eek: :eek: :eek: :eek: :eek: :eek: :LOL:


Haven't visited "Babble-on" yet, eh? And the DVD sales indicate there are more than a few of us out there! :devil:

This site proves to me that though we might not be the brightest, we are the best. :beer: :beer: :beer:
 
If there's one thing I've learned in this thread it's that while B5 might not have the MOST fans, it certainly has the smartest.

Absolutely. I always make sure that my hair is combed, my shoes shined and my trousers neatly pressed before sitting down to enjoy an episode.

:D
 
Absolutely. I always make sure that my hair is combed, my shoes shined and my trousers neatly pressed before sitting down to enjoy an episode.

And after watching 6-8 hours of episodes, our hair is a mess, the trousers are wrinkled, the shoes are off and the feet are up on something and we complain because we want to watch more but we have to get on with our lives. :D :D
 
And after watching 6-8 hours of episodes, our hair is a mess, the trousers are wrinkled, the shoes are off and the feet are up on something...

Er, quick show of hands - who else finds this image disturbing and wonders if we're all talking about the same DVDs? :D

Regards,

Joe
 
If I didn’t know better, I’d think you were willfully trying to misunderstand me.

You do realize that, when speaking about NTSC, part of the reason Film looks different from video on TV is the motion characteristics of the interlacing, right? Yes, everything is broadcast and displayed interlaced (on NTSC), but the film is originally filmed non-interlaced, and the video, recorded with an NTSC video camera, IS interlaced. When both are displayed on an NTSC TV, they look different, and part of the reason is the interlacing.

Do you understand this fact?

Well, I understood what you were talking about. There is quite a nice demonstration of how the original acquisition of the images affects the qualities of the picture when it is displayed as video, on a laser disc called A Video Standard, that I have, and I'd bet that Joe had, at least at one time. :D

P.S., unless I have company, I watch TV in gym shorts and T shirt. I like to be comfortable! :p
 
As to the lost CGI files... wouldn't it be possible to scan in the existing shots, and devise a program to enhance them, to a higher definition, perhaps adding detail manually? Sort of use the existing shots as a template, and color over them? It just seems to me that there ought to be a way to do it more easily than creating everything over, from scratch! BTW, I still haven't bought the B5 DVDs, just borrowed them from a friend. I'm holding out for B5 HD!
 
As to the lost CGI files... wouldn't it be possible to scan in the existing shots, and devise a program to enhance them, to a higher definition,
There are so-called HD scalers, but with the quality of the source material it won't look particularly good. As they say, garbage in, garbage out (garbage here referring only to the video format quality of 480i).
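As a toy illustration (hypothetical pixel values), the simplest scaler just replicates pixels; nothing that wasn't in the 480i source can appear in the upscaled output:

```python
# Minimal sketch: nearest-neighbour 2x upscaling, the crudest form of what an
# "HD scaler" does. Every output pixel is a copy of an input pixel, so no new
# detail is created -- garbage in, garbage out.

def upscale_2x(frame):
    """Double a frame's resolution by repeating each pixel as a 2x2 block."""
    out = []
    for row in frame:
        wide = [p for p in row for _ in (0, 1)]  # repeat each pixel horizontally
        out.append(wide)
        out.append(list(wide))                   # repeat each row vertically
    return out

sd = [[10, 20],
      [30, 40]]
hd = upscale_2x(sd)
# The 2x2 input becomes a 4x4 output, but only the original 4 values appear.
assert hd == [[10, 10, 20, 20],
              [10, 10, 20, 20],
              [30, 30, 40, 40],
              [30, 30, 40, 40]]
assert {v for row in hd for v in row} == {10, 20, 30, 40}
```

Real scalers interpolate rather than replicate, but the principle is the same: they can only smooth or sharpen what is already there.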

perhaps adding detail manually? Sort of use the existing shots as a template, and color over them?
What you're describing would be hand rotoscoping to manually add detail, which is a terribly labour-intensive job, and really expensive. Your method would actually be much harder, and it wouldn't look particularly good.

It just seems to me that there ought to be a way to do it more easily than creating everything over, from scratch!
Recreating everything by hand wouldn't be that hard. Your method would require them to essentially redraw every frame by hand, which would be both really hard to do and would look off if you weren't using a 3D file as a base.

You see, that's the advantage of going with 3D versus traditional animation. You only have to 'draw' (create a 3D model) once. After that it is just animating, which could be mapped to the original 2D frames we have now, although it is probably easier for animators to eyeball the animation of the original scenes and roughly match those motions.

BTW, I still haven't bought the B5 DVDs, just borrowed them from a friend. I'm holding out for B5 HD!
You might have to wait for a while. And the DVDs aren't all bad; I quite like the look of the live action in its widescreen form on the DVDs. But they do cost quite a bit of money, so if you really want HD, waiting to see what happens with B5 HD might be wise, I guess.
 
Since I'm reading this thread again anyway, might as well go over this issue one last time. :p

If I didn’t know better, I’d think you were willfully trying to misunderstand me.

You do realize that, when speaking about NTSC, part of the reason Film looks different from video on TV is the motion characteristics of the interlacing, right? Yes, everything is broadcast and displayed interlaced (on NTSC), but the film is originally filmed non-interlaced, and the video, recorded with an NTSC video camera, IS interlaced. When both are displayed on an NTSC TV, they look different, and part of the reason is the interlacing.

Do you understand this fact?
What you're still doing here is applying the term 'interlacing' to wider issues than what the term strictly means. I think you might be talking about the motion properties of a 3:2 pulled down interlaced feed versus a proper 60i interlaced feed. Calling this a non-interlaced versus interlaced problem is a bit confusing.

It's a bit like saying watching the same movie on your television or in the cinema looks different because of the lack of scanlines in the cinema. It sort of misses the point, because while something might be a contributing factor in the example, there are much more obvious reasons why things look different, so bringing up the example only serves to confuse the issue.

If I may, I'll go over all the various steps that the 24 fps film footage has to go through to get something usable for NTSC television, so we can more accurately pinpoint what you're talking about. Because I think I know what you're talking about, but your terminology is pretty general, and Joe and I might follow what you're saying more easily if you went into the specifics more.

Babylon 5 was shot on film. That is, on 35 millimetre wide celluloid coated with microscopic light-sensitive grains. Each of the frames on this film was then exposed, at the rate of 24 per second. To get this physical medium into the electronic one of television, it had to be transferred to that format. This process is called telecine, which, really, really simplified, involves projecting the film image onto either CRT or CCD cameras, cameras that produce an electronic or digital video signal. For B5, this would've been done at a resolution of 640 by 480 pixels, matching the frame size of the NTSC television standard it was broadcast in.

At this point, you would have a digital version of what was originally shot on film, in a format that could be called 480p24. However, there is more to the NTSC broadcast standard than the frame size. Notably, its framerate is 30 frames per second* instead of film's 24, and it builds up each frame by drawing it in two so-called 'fields', effectively drawing 60 fields per second. Each frame is broken up into its odd scanlines for one field and its even scanlines for the other field.

To get the "480p24" into a format suitable for NTSC broadcast, first each full 640 by 480 frame is broken up into two fields, one for the odd lines, one for the even lines. Normally, when footage that has been broken up into these odd/even fields is displayed, it is done by first drawing the odd lines of the first frame, let's say frame A, then the even lines of frame A, then the odd lines of frame B, then the even lines of frame B, etc. This is called interlacing. So: A field, A field, B field, B field, C field, C field, etc.

Getting the 24 fps footage to 30 fps involves a process of field repetition called "3:2 pulldown" (which is actually an archaic name; "2:3 pulldown" describes the process more accurately). The fields of the first frame are repeated 2 times, while the next frame has its fields repeated 3 times instead of the usual 2. So: first the odd lines of frame A, then the even lines of frame A, then the odd lines of frame B, then the even lines of frame B, then the odd lines of frame B again, then the even lines of frame C, then the odd lines of frame C, then the even lines of frame D, then the odd lines of frame D, then the even lines of frame D, etc. So: A field, A field, B field, B field, B field, C field, C field, D field, D field, D field, etc. Here's an image I yanked from Wikipedia that illustrates the process:

[Image: Wikipedia diagram illustrating the 2:3 pulldown field pattern]

So, 4 film frames get broken up into 8 fields, which by field repetition get stretched out over 10 displayed fields, or 5 interlaced frames. 4 film frames being stretched into 5 NTSC frames accounts for the frame rate difference of 24 to 30 fps (with 24/30 equalling 4/5).
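The field-repetition pattern described above can be sketched in a few lines of Python (the frame letters are just placeholders):

```python
# Sketch of the 2:3 pulldown field repetition: 4 film frames (A, B, C, D)
# become 10 displayed fields, i.e. 5 interlaced NTSC frames.

def pulldown_fields(frames):
    """Repeat each frame's fields in an alternating 2, 3, 2, 3, ... pattern."""
    out = []
    for i, frame in enumerate(frames):
        repeats = 2 if i % 2 == 0 else 3
        out.extend([frame] * repeats)
    return out

fields = pulldown_fields(["A", "B", "C", "D"])
assert fields == ["A", "A", "B", "B", "B", "C", "C", "D", "D", "D"]
assert len(fields) == 10  # 10 fields = 5 interlaced NTSC frames
# 4 film frames -> 5 NTSC frames matches the 24 -> 30 fps ratio (4/5).
```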

However, the CGI footage, at least initially, was rendered at 60 interlaced fields. So for each 0.042 seconds (1/24) in which the film would shoot a frame, the CGI (if field rendering was enabled) would produce a field every 0.017 seconds (1/60). This is, I think, what b4bob is referring to. Furthermore, the film in its interlaced format has a very specific look, distinct from native 60i footage, because of the 3:2 pulldown, which in essence causes each 24 fps frame to be displayed for alternately 0.033 seconds (2/60) and 0.050 seconds (3/60), versus the smoother look of CGI natively rendered at 60i.

*Actually 29.97 frames per second; rounded to 30 here to simplify the explanation. To match it, the 24 fps film footage is first slowed down by 0.1%, to 23.976 fps, before the pulldown is applied.
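The frame-rate arithmetic above can be checked numerically; this small sketch uses the exact NTSC rates (30000/1001 frames and 60000/1001 fields per second):

```python
# Checking the pulldown arithmetic with exact rational numbers.
from fractions import Fraction

film = Fraction(24)
ntsc_frames = Fraction(30 * 1000, 1001)  # ~29.97 frames per second
ntsc_fields = Fraction(60 * 1000, 1001)  # ~59.94 fields per second

# Film is slowed by 0.1% before pulldown: 24 * 1000/1001 = ~23.976 fps.
slowed = film * Fraction(1000, 1001)
assert abs(float(slowed) - 23.976) < 0.001

# 4 film frames -> 5 NTSC frames, so the rate ratio is exactly 4/5.
assert slowed / ntsc_frames == Fraction(4, 5)

# Pulldown alternates 2-field and 3-field frames; their display durations
# average back out to the film's own frame time: (2/60 + 3/60) / 2 == 1/24.
assert (Fraction(2, 60) + Fraction(3, 60)) / 2 == Fraction(1, 24)
```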
 
Shabaz, I did not mean hand rotoscoping; I meant digitally reworking frames, so that when you alter one frame to suit, the alterations can be transferred to other frames, as some forms of computer animation work. They sometimes scan in live action, or just sketches, and alter them to produce the desired look. I have some familiarity with animation techniques, and that is considered a great shortcut over the old hand-drawn methods. I don't know how much effort it takes to render CGI from scratch, and thought that method might be easier.

I really do think you are confusing what B4Bob is saying about the existing CGI. It's far simpler than you are making it. Even though the final product is SD 480i, when you start with a better source, such as 35mm film, the final video looks better than it would if the original was shot in 480i video. If the original CGI had been rendered in 480p, that would have been enough of an improvement to notice in the final transfer to 480i. It's that simple. Of course that improvement would be slight compared to the difference between an HD version of something originally shot in 35mm and something shot in IMAX but transferred to HD, even though 35mm resolution is already well above that of 1080i or 720p HD.
 
Shabaz, I did not mean hand rotoscoping; I meant digitally reworking frames, so that when you alter one frame to suit, the alterations can be transferred to other frames, as some forms of computer animation work. They sometimes scan in live action, or just sketches, and alter them to produce the desired look. I have some familiarity with animation techniques, and that is considered a great shortcut over the old hand-drawn methods. I don't know how much effort it takes to render CGI from scratch, and thought that method might be easier.
That still would be an incredibly labour-intensive thing to do compared to going the CGI route, I think. And even computer-assisted manual detail-adding would be called rotoscoping, as far as I know.

I really do think you are confusing what B4Bob is saying about the existing CGI. It's far simpler than you are making it. Even though the final product is SD 480i, when you start with a better source, such as 35mm film, the final video looks better than it would if the original was shot in 480i video.
Obviously. But that's because of a lot of things that are different between film and something shot on 480i. Framerate, resolution, light sensitivity properties, etc. etc. What b4bob keeps hammering on about is that "the film is originally filmed non-interlaced, and the video, recorded with an NTSC video camera, IS interlaced" and that this interlaced versus non interlaced gives it a different look.

Now, there are a lot of things that look totally different between film and something not film, so I tried to isolate exactly what he might be talking about, when he says that it is a non-interlaced versus interlaced issue.

I think I managed to be reasonably thorough in describing all the steps the CGI or film might go through. Do you think I missed anything?

If the original CGI had been rendered in 480p, that would have been enough of an improvement to notice in the final transfer to 480i.
Would it? Why?

If the interlaced CGI was rendered without field rendering enabled, you would not be able to tell the difference. If it was rendered with field rendering, the 480p version actually would look a bit less smooth than the 480i version in the final product, and maybe a bit closer to what the film looks like, though still not matching the film's 3:2 pulled down look.
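A quick sketch of why 480p and field-rendering-off 480i carry the same picture content: splitting a progressive frame into its two fields and weaving them back together reproduces the frame exactly (the scanline strings below are placeholders):

```python
# If both fields come from the same instant (no field rendering), interlacing
# is just a reversible reordering of scanlines -- splitting a progressive
# frame into fields and weaving them back yields the identical frame.

def split_fields(frame):
    """Split a frame (list of scanlines) into its odd and even fields."""
    odd = frame[0::2]   # odd-numbered scanlines (lines 1, 3, ... 1-indexed)
    even = frame[1::2]  # even-numbered scanlines (lines 2, 4, ...)
    return odd, even

def weave(odd, even):
    """Interleave the two fields back into a full progressive frame."""
    out = []
    for o, e in zip(odd, even):
        out.extend([o, e])
    return out

frame = ["line1", "line2", "line3", "line4"]
odd, even = split_fields(frame)
assert weave(odd, even) == frame  # nothing is lost or gained
```

Only when the two fields sample *different* moments in time (field rendering on, or a true 60i camera) does interlaced footage carry motion information a progressive frame cannot.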

And this isn't what b4bob suggested. He brought up a JMS quote that stated they later switched over to 24 fps rendering, which presumably then went through the pull down process and would have roughly the same motion look as the film. Now, this is technically a framerate issue, but I thought I'd bring in the 3:2 pulldown process, since it brings into play interlacing issues, and it might explain why b4bob is talking about it as an interlacing problem.
 
As to the lost CGI files... wouldn't it be possible to scan in the existing shots, and devise a program to enhance them, to a higher definition, perhaps adding detail manually?
I don't think that it's that easy.

What I think it was that they lost was the files that were inputs to the creation of the existing shots ..... basically detailed CAD-type files that were the blueprints of all of the ship classes and such.

If you had scenes rendered in stereo, so that you could see the parallax between the shots, then you could back out a 3-D model. You could conceivably use the motion of a couple of frames of a sequence as a stereo pair, but to really do the calculations you would want to have all of the viewing geometry parameters that had been used to generate the frames. If they couldn't even manage to hang on to the CAD files for the ships, then I find it highly unlikely that they still had the parameters used to generate each of the frames of their animations.

Backing out everything, both viewing geometry and "elevations" for each point, then extrapolating the full ship from what you could see (or piecing together the elevations generated from multiple view angles of the ship) ..... and let's not forget to take into account the relative motion between the ship (which may be maneuvering) and the virtual camera (which was not typically kept stationary for these shots) ..... and things like the motion of the Aggie's rotating crew section ........ and then, of course, using that derived data to create the form of CAD file that the rendering software expects as an input (what is naturally generated by backing elevation / distance numbers out of the visible parallax will not be in anything like that same form) ......

I can see that it might very well be easier to just re-create the CAD files from scratch ..... well, from whatever notes about sizes and shapes survived in the notes of JMS and others.

Unless you think that the special-purpose software you write to back out all of the info would actually need to be re-used multiple times for various things (which seems fairly unlikely, at least in terms of what the people budgeting for it could actually plan on), I doubt that backing everything out of the old animations would be the more economical approach.
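For what it's worth, the stereo idea boils down to the classic pinhole relation depth = focal × baseline / disparity. The numbers below are made up purely for illustration; the point is that focal length and baseline are exactly the viewing-geometry parameters that were lost:

```python
# Toy sketch (hypothetical numbers) of depth-from-parallax: with a known
# baseline between two views and a known focal length, a point's disparity
# (its shift between the views) gives its depth. Without the original
# viewing-geometry parameters, focal and baseline are unknowns -- which is
# why backing a model out of rendered frames is so hard.

def depth_from_disparity(focal_px, baseline, disparity_px):
    """Classic pinhole stereo relation; units are purely illustrative."""
    return focal_px * baseline / disparity_px

# A point shifting 8 px between views, focal length 800 px, baseline 2 units:
d = depth_from_disparity(800.0, 2.0, 8.0)
assert d == 200.0

# Halving the disparity doubles the inferred depth (more distant point):
assert depth_from_disparity(800.0, 2.0, 4.0) == 400.0
```

Modern match-moving software automates much of this, but it still benefits enormously from known camera parameters, and it recovers a point cloud rather than the clean parametric model a renderer wants as input.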
 
As to the lost CGI files... wouldn't it be possible to scan in the existing shots, and devise a program to enhance them, to a higher definition, perhaps adding detail manually?
I don't think that it's that easy.

What I think it was that they lost was the files that were inputs to the creation of the existing shots ..... basically detailed CAD-type files that were the blueprints of all of the ship classes and such.
Both the model files (what you call CAD-type files) and the scene files (where the camera and model movements were laid out for the shots) were lost, I believe. Recreating the model files would actually be the easier part of the equation, I think, and some of them are going to be recreated for TLT anyway. Recreating all the animation files would be harder.

As far as using the original shots for the animation movements, there actually are ways to track 2d movements and extract 3d data from them. But it would be much easier I think, like I said before, for animators to just use the original CGI shots to eyeball the movements, and then roughly match them by hand.

If you had scenes rendered in stereo, so that you could see the parallax between the shots, then you could back out a 3-D model. You could conceivably use the motion of a couple of frames of a sequence as a stereo pair, but to really do the calculations you would want to have all of the viewing geometry parameters that had been used to generate the frames. If they couldn't even manage to hang on to the CAD files for the ships, then I find it highly unlikely that they still had the parameters used to generate each of the frames of their animations.

Backing out everything, both viewing geometry and "elevations" for each point, then extrapolating the full ship from what you could see (or piecing together the elevations generated from multiple view angles of the ship) ..... and let's not forget to take into account the relative motion between the ship (which may be maneuvering) and the virtual camera (which was not typically kept stationary for these shots) ..... and things like the motion of the Aggie's rotating crew section ........ and then, of course, using that derived data to create the form of CAD file that the rendering software expects as an input (what is naturally generated by backing elevation / distance numbers out of the visible parallax will not be in anything like that same form) ......

I can see that it might very well be easier to just re-create the CAD files from scratch ..... well, from whatever notes about sizes and shapes survived in the notes of JMS and others.

Unless you think that the special-purpose software you write to back out all of the info would actually need to be re-used multiple times for various things (which seems fairly unlikely, at least in terms of what the people budgeting for it could actually plan on), I doubt that backing everything out of the old animations would be the more economical approach.
Like I said, recreating models is the easy part of the equation, to some extent. It would not be that hard for an artist to recreate the various models, and the most labour intensive job would be to, I think, recreate all the specific scene files.
 
