A common question we were asked during our time at VGU was how our cables work. The actual implementation is somewhat complicated, but the concept is quite simple and easy to understand.

Several consoles released in the '80s and '90s had proprietary connectors for their audio/video outputs. If you were like me at the time, you got frustrated with these things. I know I was relieved when my launch PlayStation 1 provided discrete A/V connections so I wouldn't have to deal with Sony's special connector. It turns out that these seemingly annoying A/V connectors carried more than just the standard composite video (yellow) and stereo audio (white/red) we were used to. In some cases, there were pins on those connectors which provided an S-Video signal for improved picture quality on TVs with such an input. And that's it, right? Well, yes, it was for us living in North America. But over in Europe, all this was a sideshow to what was actually possible.

comp_svid.jpg

To understand the following, you need to be familiar with a very simple concept. A display, such as a TV, shows an image by combining light in red, green, and blue (RGB) components. A source, such as a camera, takes light in and splits it into its RGB components. A different type of source, such as a computer graphics system, generates RGB signals directly instead of capturing them. For optimal quality, you want the path from source to display to be as direct as possible. Any deviation from this path can result in artifacts and visible errors on the display. For example, to create composite video, the RGB needs to be heavily processed and combined into a single video signal. The processes involved in combining the video and then separating it back into RGB cause a major deviation from the ideal path, hence the subpar quality achieved with composite video. The following diagram attempts to visualize this detour that composite video takes in relation to a direct RGB connection.

source_display_rgb.png

Back to the consoles. Almost all of those funky A/V connectors had pins carrying the raw RGB video generated by the console's graphics system. And in Europe, TVs had special inputs that could accept and display those signals. They did this through a special connector on the television called SCART. The connector was very large, with many pins, and incorporated several features to simplify A/V equipment connections. Think of it as the analog precursor to the digital HDMI connection we are all familiar with these days. With a special cable that connects the console's RGB pins to the SCART connector, you can achieve an almost perfect source-to-display connection.

SCART.jpg

For those of us in North America (and several other parts of the world), our TVs never accepted these raw RGB signals. But what about those red, green, and blue colored RCA jacks commonly found on our TVs? Despite the coloring, those are not RGB inputs, but they are very close. YPbPr (or "component") video is only a simple, reversible transformation away from RGB. So the idea is to perform that transformation so we can use a compatible input on the TV. The TV then undoes the transformation to recover the RGB and display it. Below is a page from my engineering notebook which begins to explain the theory behind our product; the diagram at the top of the page illustrates the concept I've just explained.

ypbpr_concept.jpg
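To make the "reversible transformation" concrete, here is a minimal Python sketch using the BT.601 coefficients that standard-definition component video is based on. The function names are my own, and real hardware operates on analog voltages rather than floating-point values, so treat this purely as an illustration of the math:

```python
def rgb_to_ypbpr(r, g, b):
    """Convert normalized RGB (0.0-1.0) to YPbPr using BT.601 weights."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma (brightness)
    pb = 0.564 * (b - y)                    # blue color difference
    pr = 0.713 * (r - y)                    # red color difference
    return y, pb, pr

def ypbpr_to_rgb(y, pb, pr):
    """Invert the transformation -- roughly what the TV does internally."""
    b = y + pb / 0.564
    r = y + pr / 0.713
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b
```

Because the transformation is linear and invertible, a round trip through `rgb_to_ypbpr` and `ypbpr_to_rgb` recovers the original RGB values (up to floating-point rounding), which is exactly why the detour through YPbPr costs essentially nothing in picture quality.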

Some people say that you're just as well off getting an RGB SCART to YPbPr conversion box. Even disregarding the hassle and cost of dealing with conversion boxes, I still don't believe this is true. We've spent a lot of time here at HQ researching the RGB video signals coming out of both the SNES and Genesis consoles. Using custom test software, we measured the signals accurately and determined that these consoles were not designed to properly follow any particular standard. Their video signals can be too dark, too bright, have slanted lines (field tilt), and contain unwanted noise. Our cables compensate for these problems. Think of them not only as simple plug-and-play conversion devices, but also as "signal conditioners" that achieve the best possible output from your consoles.

As always, if you have any questions, please feel free to contact us via the contact page. Also, let us know if you like these more technical blog posts and whether we should do more of them.

Posted by Ste Kulov

We recently posted comparison videos highlighting the differences between composite video cables and our component video cables. We chose YouTube as the video host due to its popularity and accessibility. However, YouTube does not support frame rates greater than 30fps (frames per second). This is a hurdle we had to overcome when preparing our videos for upload.

In this blog post, we discuss a simple method for downsampling a video from 60fps to 30fps that retains the flicker and jitter present in the original 60fps video. Televisions in the United States and Japan display video at a 60fps rate, which is the frame rate provided to them by consoles like the Super Nintendo Entertainment System and Sega Genesis. YouTube, on the other hand, displays video at a maximum rate of 30fps. Therefore, when video recorded at 60fps is uploaded for sharing, it is downsampled (and possibly pre-filtered) by a factor of two. When this happens, any flicker that is produced by pixels turning on and off from frame to frame is lost.

A typical example of this might be a character blinking during temporary invincibility after taking damage. (See here for a real-life example.) In such a case, the resulting downsampled 30fps video would show either a solid character or no character at all. In addition, during our work we discovered a similar issue when trying to display certain types of jitter present in the original video as a result of using the SNES's composite video output. Because of the downsampling to 30fps required by YouTube, this jitter was no longer present in the 30fps video, and we had no way of providing a representative comparison to the HD Retrovision component cables, which alleviate this problem.

To solve this issue, a simple model of jitter was considered. In the following jitter model, we imagine a single row of pixels consisting of only 0's (black) and 1's (white) shifting between frames at 60fps.

Frame 0:       1   |   0   |   1   |   0   |   1

Frame 1:       0   |   1   |   0   |   1   |   0

Frame 2:       1   |   0   |   1   |   0   |   1

Frame 3:       0   |   1   |   0   |   1   |   0

Frame 4:       1   |   0   |   1   |   0   |   1

Assuming a simple scheme of dropping frames (although this will work similarly with an averaging pre-filter), it is easy to see why the flicker disappears in the 30fps video:

Frame 0:       1   |   0   |   1   |   0   |   1

Frame 2:       1   |   0   |   1   |   0   |   1

Frame 4:       1   |   0   |   1   |   0   |   1
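In code, the standard downsample is just keeping every other frame. A minimal Python sketch of the model above (the variable and function names are my own, for illustration only):

```python
# One alternating row of pixels, flipping between frames at 60fps.
frames_60fps = [[1, 0, 1, 0, 1] if i % 2 == 0 else [0, 1, 0, 1, 0]
                for i in range(8)]

def drop_every_other(frames):
    """Standard 2:1 downsample: keep frames 0, 2, 4, ..."""
    return frames[::2]

frames_30fps = drop_every_other(frames_60fps)
# Every surviving frame is identical, so the flicker is gone.
```

Since only even-numbered frames survive, and in this model all even-numbered frames are identical, the 30fps output is a static image with no trace of the original flicker.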

If we imagine that frames are paired up like so [0 1], [2 3], [4 5], [6 7], ... , then the standard downsampling method is simply choosing the first frame in each pair. As above, we'd get [0], [2], [4], [6] ... for our frames. Instead, what we'd like to accomplish is to alternate which frame we choose to drop from each pair of frames. To do this, we can simply flip frames in every other pair before running the standard downsampling method. We rearrange frames as [0 1], [3 2], [4 5], [7 6], ..., so that the resulting output frames in the 30fps video are [0], [3], [4], [7], ... and so on. The result retains the jitter in our simple model:

Frame 0:       1   |   0   |   1   |   0   |   1

Frame 3:       0   |   1   |   0   |   1   |   0

Frame 4:       1   |   0   |   1   |   0   |   1
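The alternating pick described above is easy to sketch in Python. This is our reading of the scheme as a standalone function (not the exact production workflow, which rearranges frames before a standard encoder):

```python
def alternating_downsample(frames):
    """2:1 downsample that keeps the first frame of even-numbered
    pairs and the second frame of odd-numbered pairs:
    indices 0, 3, 4, 7, 8, 11, ..."""
    kept = []
    for pair in range(len(frames) // 2):
        offset = pair % 2  # 0 for even pairs, 1 for odd pairs
        kept.append(frames[2 * pair + offset])
    return kept

indices = alternating_downsample(list(range(8)))
# -> [0, 3, 4, 7]
```

Feeding it frame indices 0 through 7 returns [0, 3, 4, 7], matching the selection above; because consecutive kept frames (e.g. 0 and 3) come from opposite phases of the 60fps flicker, the alternation survives in the 30fps output.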

So how does this hold up in practice? Is our extremely simple model at all representative of reality, or is the output an unwatchable mess that looks nothing like the original 60fps video? It turns out that despite the simplicity of the model, this downsampling scheme works quite well. In the example video below, you can see both the flicker and the jitter inherent in the higher-frame-rate video even though it is being displayed at a downsampled rate of 30fps.

Note: All three segments below are from video captured using standard composite video cables and do not represent the improvements gained by using HD Retrovision cables. The right pane is the HD Retrovision downsampling scheme applied to composite video.

Posted by Nickolaus Mueller