Streaming & Web Video
Video and audio technology have become an important part of everyday life. Video technology lets us watch television, play video games and stream YouTube, while audio technology lets us listen to music and radio and talk to anyone over a mobile phone.
Video and Television History
Video technology has its origins in the invention of the television and the film camera. Television was not invented by a single person, but was rather the result of decades of work by many researchers. TV signals started out as analog in the early 1900s and stayed analog for nearly a century, until the first digital broadcasts went on air in the late 1990s.
Video Technology Timeline
- In 1831, Joseph Henry’s and Michael Faraday’s work with electromagnetism laid the groundwork for television technology.
- In 1862, Abbé Giovanni Caselli invented his Pantelegraph and became the first person to transmit a still image over wires.
- In 1900, the word “television” appeared for the first time: at the World’s Fair in Paris, during the first International Congress of Electricity, the Russian scientist Constantin Perskyi made the first known use of the word.
- In 1927, Bell Telephone and the U.S. Department of Commerce conducted the first long-distance demonstration of television.
- In 1941, the FCC released the NTSC standard for black-and-white TV.
- In 1946, Peter Goldmark, working for CBS, demonstrated his color television system to the FCC.
- In 1948, cable television was introduced.
- In 1950, the FCC approved the first color television standard, which was replaced by a second in 1953.
- Vladimir Zworykin developed a better camera tube called the Vidicon.
- In 1956, Ampex introduced the first practical broadcast-quality videotape system.
- In 1996, the FCC approved ATSC’s HDTV standard.
- In 2004, LCD and plasma TVs began to replace the CRT TV.
- In 2010, AMOLED and 3D TVs entered the market.
- In 2013, 4K-resolution TVs entered the market.
YUV Color Space
The YUV color space is the color encoding system used for analog television worldwide (NTSC, PAL and SECAM). It differs from RGB, which is what the camera captures and what humans view. When color signals were developed in the 1950s, it was decided to let black-and-white TVs continue to receive and decode monochrome signals, while color sets would decode both monochrome and color signals.
Luma and Color Difference Signals
The Y in YUV is defined as “luma,” which is brightness, or lightness, and black and white TVs decode only the Y part of the signal. U and V provide color information and are “color difference” signals of blue minus luma (B-Y) and red minus luma (R-Y). Through a process called “color space conversion,” the video camera converts the RGB data captured by its sensors into either composite analog signals (YUV) or component versions (analog YPbPr or digital YCbCr). For rendering on screen, all these color spaces must be converted back again to RGB by the TV or display system.
Mathematically Equivalent to RGB
YUV also saves transmission bandwidth compared to RGB, because the chroma channels (B-Y and R-Y) carry only half the resolution of the luma. YUV is not compressed RGB; rather, Y, B-Y and R-Y are the mathematical equivalent of RGB. See color space conversion and YUV/RGB conversion formulas.
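As a rough sketch of those conversion formulas, the snippet below uses the ITU-R BT.601 luma weights for standard definition; the function names are illustrative, not part of any particular library.

```python
# A sketch of the YUV/RGB conversion formulas, using the ITU-R BT.601
# luma weights (0.299, 0.587, 0.114) for standard definition.

def rgb_to_yuv(r, g, b):
    """Convert normalized RGB (0.0-1.0) to luma and color-difference signals."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma: weighted brightness
    u = b - y                                # B-Y color difference
    v = r - y                                # R-Y color difference
    return y, u, v

def yuv_to_rgb(y, u, v):
    """The inverse conversion a display performs before rendering."""
    b = u + y
    r = v + y
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

# Pure white carries no color information: B-Y and R-Y come out (near)
# zero, so a black-and-white set can render it from the Y signal alone.
y, u, v = rgb_to_yuv(1.0, 1.0, 1.0)
```

Note that U and V are shown here as plain differences; broadcast systems additionally scale the color-difference signals, but the principle of luma plus two color differences is the same.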
Composite Video and S-video
The original TV standard combined luma (Y) and both color signals (B-Y, R-Y) into one channel, which uses one cable and is known as “composite video.” An option known as “S-video” or “Y/C video” keeps the luma separate from the color signals, using one cable, but with separate wires internally. S-video is a bit sharper than composite video.
When luma and each of the color signals (B-Y and R-Y) are maintained in separate channels, it is called “component video,” designated as YPbPr when in the analog domain and YCbCr when it is digital. Component video is the sharpest of all.
Interlaced Versus Progressive Video
Interlaced video has been around as long as there have been televisions with cathode ray tubes (CRTs). All analog television standards are based on interlaced video, and we are so used to it that it is not something we typically think about. High-definition video can be either interlaced or progressively scanned.
Progressive-scan video is scanned from side to side, top to bottom: line 1, line 2, line 3 and so on, up to the end of the frame. Interlaced video is also scanned from side to side, top to bottom. The difference is that every 60th of a second, every other line making up the complete frame is scanned; then, a 60th of a second later, the lines in between the lines that have already appeared are scanned. Effectively, half the picture’s vertical resolution is sent in the first 60th of a second and the second half in the next. When the video is played back, the process happens in reverse, giving the appearance of a complete frame. Each of these “halves” of a frame is called a “field”.
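The field structure described above can be sketched in a few lines; the function names below are made up for this example.

```python
# Illustrative sketch of interlacing: a frame (a list of scan lines) is
# split into two fields, each holding every other line, sent a 60th of a
# second apart; the display weaves them back into a complete frame.

def split_fields(frame):
    """Return (field_1, field_2): the alternating lines of a frame."""
    return frame[0::2], frame[1::2]

def weave_fields(field_1, field_2):
    """Reassemble a full frame from its two fields."""
    frame = []
    for a, b in zip(field_1, field_2):
        frame.extend([a, b])
    return frame

frame = [f"line {n}" for n in range(1, 9)]   # a tiny 8-line "frame"
f1, f2 = split_fields(frame)
print(f1)  # ['line 1', 'line 3', 'line 5', 'line 7']
assert weave_fields(f1, f2) == frame         # lossless round trip
```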
The effect to the viewer is quite distinct. First, the image doesn’t flicker as much as it would if it were simple 30 fps progressive scan. This is because, to the viewer, it looks as if they are seeing 60 frames per second. What they are actually seeing is 60 fields per second, but for flicker reduction the effect is the same as seeing 60 frames.
720p and 1080i Formats
Progressively scanned high definition video normally has a resolution of 1280 by 720 pixels, referred to as “720p”, where the “p” stands for progressive. Likewise, “1080i” is interlaced video with a frame size of 1920 by 1080 pixels. Interlacing is actually a form of compression: offsetting the two fields that make up a frame by half a frame’s duration halves the total amount of information needed to transmit or store the video. Uncompressed interlaced high definition video generates around a gigabit per second; without interlacing, the rate would be twice that.
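The “around a gigabit per second” figure can be checked with back-of-envelope arithmetic. The sketch below assumes 8-bit 4:2:2 sampling (an average of 16 bits per pixel); actual broadcast rates vary with bit depth and chroma subsampling.

```python
# Back-of-envelope uncompressed HD bit rates, assuming 8-bit 4:2:2
# sampling (16 bits per pixel on average).

def bitrate_bps(width, height, frames_per_sec, bits_per_pixel):
    """Uncompressed bit rate in bits per second."""
    return width * height * frames_per_sec * bits_per_pixel

# 1080i: 30 full frames (60 fields) per second
interlaced = bitrate_bps(1920, 1080, 30, 16)
# 1080p at 60 full frames per second: double the data
progressive = bitrate_bps(1920, 1080, 60, 16)

print(f"1080i: {interlaced / 1e9:.2f} Gb/s")   # 1080i: 1.00 Gb/s
print(f"1080p: {progressive / 1e9:.2f} Gb/s")  # 1080p: 1.99 Gb/s
```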
Digital Video
Digital video has many benefits over analog video. It was introduced to increase quality, improve editability and provide richer content. Among the early digital video standards were the DV standards, introduced on digital video cameras, and the DVD standard. Both of these still offered the same resolution as the analog world, restricted by the resolution of CRT monitors. With the introduction of LCD, DLP and plasma televisions, resolution was no longer a limitation.
Digital video is highly data intensive. A standard-resolution bit stream translates to around 30 Mb/second. Over the history of digital video, a range of video compression standards has been introduced, and improvements have led to a succession of new standards.
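A rough calculation shows why compression standards followed digital video. The figures below assume NTSC standard definition with 8-bit 4:2:2 sampling, and use well-known nominal rates for DV and DVD video as reference points.

```python
# Rough arithmetic showing why video compression is needed. Assumes NTSC
# standard definition (720x480 at 30 fps) with 8-bit 4:2:2 sampling
# (16 bits per pixel); the DV and DVD rates are nominal figures.

WIDTH, HEIGHT, FPS = 720, 480, 30
BITS_PER_PIXEL = 16

uncompressed = WIDTH * HEIGHT * FPS * BITS_PER_PIXEL   # bits per second
dv_rate = 25_000_000    # DV video: ~25 Mb/s
dvd_rate = 6_000_000    # typical MPEG-2 DVD video: ~6 Mb/s

print(f"uncompressed SD: {uncompressed / 1e6:.0f} Mb/s")   # 166 Mb/s
print(f"DV compression ratio:  ~{uncompressed / dv_rate:.0f}:1")
print(f"DVD compression ratio: ~{uncompressed / dvd_rate:.0f}:1")
```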
Disadvantages of Interlaced and Progressive
Interlaced video does have some disadvantages compared with progressive. It is prone to “combing” artifacts, especially in slow motion or in still frames grabbed from moving video. These are caused by the relative movement between the two fields that make up a single frame. It is a fact, though, that 1080p (1920 by 1080 progressive), which might appear to be the perfect format for high definition video, would generate too much data for current consumer technology, so we may have to wait a while for it.
- 1990: MJPEG (DV)
- 1995: MPEG-2 (DVD)
- 1998: MPEG-4 (Streaming, CCTV)
- 2003: MPEG-4 Part 10 / H.264 AVC (video conferencing, streaming)
- 2006: AVCHD (camcorder and television standards)
Digital video has enabled the introduction of high definition video to the consumer market. Currently, almost all TVs sold carry the high definition stamp. High definition is currently delivered as 720p, 1080i and 1080p. The future holds further resolution improvements, such as 4K video.