
  def Softwaremaker() :
         return "William Tay", "<Challenging Conventions />"

  knownType_Serialize, about = Softwaremaker()

 Saturday, September 24, 2011

This might be a little bit different from what this blog is themed towards but it still has a slight tinge of software flavour to it.

Those who know me well will know that I have been dabbling in music for the past year or so. Beyond the musical genres themselves, the sound-engineering side of it, both acoustic and digital, fascinates me. I recently had a chance to learn about the lip-sync issues that HDMI threw up. The write-up here is very good and explains why HDMI 1.2 and 1.3 are all poor band-aids on a problem that shouldn't have happened in the first place. RTP packets (in internet VoIP and video) carry timestamps, and RTCP sender reports tie those per-stream timestamps to a shared wallclock so a receiver can synchronize audio and video. It is therefore strange and unimaginable to me, from an engineering perspective, that the first version of HDMI was released without at least considering the possible variable delays on the two chains. OK, I have digressed.
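To make the RTP comparison concrete, here is a minimal sketch (timestamp and clock values are made up for illustration, not taken from any real stream) of how a receiver uses the (RTP timestamp, wallclock time) pair from an RTCP sender report to put audio and video on one timebase:

```python
# Sketch: how an RTP receiver lines up audio and video.
# Each stream has its own RTP clock; RTCP sender reports pair an
# RTP timestamp with sender wallclock time, giving a shared timebase.

def rtp_to_wallclock(rtp_ts, sr_rtp_ts, sr_wallclock_secs, clock_rate):
    """Map an RTP timestamp to sender wallclock seconds using the
    (RTP timestamp, wallclock) pair from the latest sender report."""
    return sr_wallclock_secs + (rtp_ts - sr_rtp_ts) / clock_rate

# Audio on a 48 kHz RTP clock; video on the usual 90 kHz clock.
audio_wc = rtp_to_wallclock(rtp_ts=480480, sr_rtp_ts=480000,
                            sr_wallclock_secs=1000.0, clock_rate=48000)
video_wc = rtp_to_wallclock(rtp_ts=90900, sr_rtp_ts=90000,
                            sr_wallclock_secs=1000.0, clock_rate=90000)

# Both samples map to the same wallclock instant (1000.01 s), so the
# receiver knows they belong together and can present them in sync.
print(audio_wc, video_wc)
```

HDMI, by contrast, shipped with no such cross-stream timestamping, which is the root of the whole lip-sync saga.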

In any case, I encountered this problem head-on recently when I wired up all the video devices I had with HDMI, thanks to the many HDMI options my new TV offered me. The audio capabilities of my AV receiver, however, remained at an analog level at best.

In a nutshell, what happened was that the audio delivered through my AV receiver and speakers was processed, and therefore heard, a lot faster than the visuals were processed by the TV. In other words, I heard the crash before the moment the drummer actually hit the cymbals.

Contrary to popular belief, this is NOT the lip-sync issue that HDMI 1.3 was designed to solve. The usual culprit in audio lead is the TV's video processing, which is constantly scaling the incoming signal to the panel's native resolution. Most of the workarounds today revolve around getting an AV receiver that allows a time-lag adjustment per source, in effect letting you calibrate, or slow down, your audio processing to match the *slower* video processing. This works, provided you have enough dough to cough up for a new AV receiver, probably with matching speakers.
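What that receiver-side delay setting does is conceptually simple: hold the audio back by a fixed interval so it arrives as late as the video. A hypothetical sketch, with made-up delay and sample values:

```python
# Sketch of what an AV receiver's "audio delay" setting amounts to:
# buffer the audio by N ms so it lands in step with the slower video
# chain. The 40 ms figure below is illustrative, not a recommendation.

def delay_audio(samples, delay_ms, sample_rate=48000):
    """Return the audio with delay_ms of leading silence prepended."""
    pad = int(sample_rate * delay_ms / 1000)
    return [0] * pad + list(samples)

# A 40 ms delay at 48 kHz means 1920 samples of silence up front.
delayed = delay_audio([0.5, -0.5], delay_ms=40)
print(len(delayed))
```

A real receiver does this with a DSP ring buffer rather than by literally padding a list, but the arithmetic (delay in samples = rate × delay / 1000) is the same.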

I decided to apply some common sense and see if there was a way to *speed up* my video processing so it could catch up with the audio processing instead. Now, I am aware this would probably mean you may not get the best visuals from your TV. To be honest, though, a lot of that fine detail is not visible to the naked eye, not mine anyway, so I am willing to live with that compromise.

If you are still with me at this point, you would know that most TVs today come with a "Game-mode". It is designed to reduce the amount of processing involved in producing the image on the screen so that high-speed, high-intensity graphics can be served up fast on your TV. By speeding up the served image, it reduces input lag.
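Some back-of-the-envelope arithmetic shows why this helps; the frame counts below are illustrative assumptions, not my TV's actual figures:

```python
# Illustrative arithmetic: each frame of video processing at 60 Hz
# adds roughly 16.7 ms of lag. If the picture pipeline buffers, say,
# 4 frames and Game-mode cuts that to 1, the audio lead shrinks from
# about 67 ms to about 17 ms, which is far harder to notice.
frame_ms = 1000 / 60          # duration of one frame at 60 Hz
normal_lag = 4 * frame_ms     # hypothetical full-processing pipeline
game_lag = 1 * frame_ms       # hypothetical Game-mode pipeline
print(round(normal_lag, 1), round(game_lag, 1))
```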

I set my TV to "Game-mode" and, true enough, the *calibration effect* was applied and my video processing could now match my audio processing. The graphics are still superb to my naked eye, just a little less vivid, which is not something you would care about while watching a live concert DVD, etc.

Till I decide to plonk down money on an AV receiver that lets me set a time-lag/delay for my audio processing, this *free* workaround actually works well and will suffice for now.

Saturday, September 24, 2011 3:11:27 AM (Malay Peninsula Standard Time, UTC+08:00)