December 30, 2005

Why I love FFMPEG: Flexibility, interoperability, encoding options.

Why do I love FFMPEG? The first reasons are its flexibility, its interoperability and its rich set of encoding options. FFMPEG is an open source project: it's free and works well on both Windows and Linux. I can use this tool to create FLV files from a wide range of video and audio source files, converting one shot at a time or in batch, in both desktop and server environments (a PHP extension also exists for the latter). FFMPEG supports AVI, WMV9, DIVX, MP3, MPEG1-2-4, DV, MJPEG, H.263, H.264, 3GPP, AMR, FLV and many others.

It's not surprising that Google Video (http://video.google.it) uses FFMPEG extensively to convert all sorts of video sources to FLV. The cost (zero), the options and the performance offered by this free tool have definitely convinced Google's developers.
The lack of support for the latest Flash video codec (VP6) is the only negative spot. Unfortunately, since VP6 is a proprietary format, I think we will never see such support in FFMPEG. On the other hand, at this moment it's impossible to do more than what FFMPEG does, or at a better price/performance ratio, with other solutions. Producing VP6 videos with On2's server-side solution or with batch Sorenson conversion is possible but not always convenient. In fact FFMPEG is much faster, it's free and, at the moment, it's the only solution able to re-compress an FMS (FCS) recorded FLV.

Let's take a look at FFMPEG command line:

FFMPEG.exe -i inputfile.xxx [parameters] outputfile.flv

To convert video files to FLV, the most important parameters are:

-b, the average video bitrate. See also -maxrate and -bufsize.
-r, frame rate. Use -re to read the input at its native frame rate.
-s, frame size, e.g.: -s 320x200.
-maxrate, use this instead of -b for low-bitrate video.
-minrate, set the minimum bitrate. Set maxrate=minrate=b to obtain a constant bitrate.
-bufsize, set the size (in KByte) of the buffer used to control the average bandwidth.
-sameq, convert the video using the same per-macroblock quantization as the source.
-pass n, for two-pass encoding: launch the first pass with -pass 1 and the second with -pass 2.
-g, distance between keyframes (GOP size).
-i_qfactor, use it to set the difference in quantization between P-frames and keyframes.
-qscale, set a fixed macroblock quantization (range: 1-31; 1 is the best quality, 31 the worst).
-qmin, minimum quantization (maximum macroblock quality); try a value of 3-4 or higher for low bitrates.
-qmax, maximum quantization (minimum macroblock quality), default 31.
-me, motion estimation method. The default is "epzs"; try "full" for a little more quality (at much lower speed).
-deinterlace, deinterlace the source (probably using a bob technique).

-ar, audio sampling frequency (5512, 11025, 22050 or 44100 samples/s).
-ab, audio bitrate in kbit/s (160, 128, 96, 80, 64, 48, 32, 24, 16).
-ac, number of audio channels (1-2).
-acodec copy, copy the audio track untouched, without re-encoding.
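
For example, a typical command line to produce a medium-bitrate Spark FLV could look like the following sketch (the values are only a reasonable starting point, not a definitive recipe):

FFMPEG.exe -i source.avi -s 320x240 -r 15 -b 350 -g 75 -me full -ar 22050 -ab 48 -ac 1 output.flv

This encodes a 320x240, 15 fps video at an average of about 350 kbit/s, with a keyframe every 5 seconds and a mono 48 kbit/s audio track. Be aware that the units of -b, -maxrate and -bufsize have changed across FFMPEG versions (in the builds of this period -b and -maxrate are in kbit/s and -bufsize in KByte), so check the documentation of your build.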

Depending on the settings, it's possible to encode at a variable or constant bandwidth, to control the maximum and minimum quality levels, and to encode with a specific prebuffer in mind. It's also possible to deinterlace (the standard Flash 8 encoder completely lacks this feature).

Spark-encoded FLVs are still good at high bitrates (600-700 kbit/s, depending on the content of the video source), while they are clearly inferior to VP6 at low-to-medium bitrates (especially under 350 kbit/s). To encode better at low bitrates I suggest cutting the frame rate, raising -qmin (try 5-6 or higher instead of the default 2), setting -maxrate instead of -b, or using a wide prebuffer (-bufsize). Also try two-pass encoding (which gives mixed results) and an i_qfactor of 2-4.
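
As a sketch of this low-bitrate recipe (again, the exact values are illustrative), a two-pass encode could be launched like this:

FFMPEG.exe -i source.avi -s 320x240 -r 10 -maxrate 250 -bufsize 300 -qmin 5 -i_qfactor 3 -pass 1 output.flv
FFMPEG.exe -i source.avi -s 320x240 -r 10 -maxrate 250 -bufsize 300 -qmin 5 -i_qfactor 3 -pass 2 output.flv

The first pass only collects statistics in a log file; the second pass produces the final, better-distributed encode.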

Fabio Sonnati ::



December 27, 2005

Why I love FFMPEG: Intro.

Like any other developer involved in Flash Video application development, I'm very happy with VP6, the new Flash video codec. VP6 is a state-of-the-art codec, capable of encoding video very efficiently for a wide range of target bandwidths.

The old Spark codec (basically an H.263 codec) isn't very efficient below 300-400 kbit/s. VP6, instead, is much better at retaining information from previous frames using far fewer bytes: even at very low bitrates, the new codec can still reference previous frames efficiently.

But, unfortunately, bandwidth efficiency is not always the first and most important required feature. Often, in projects involving video encoding, we need other key features such as flexibility, encoding speed and interoperability. Flash 8's standard VP6 encoder is more than sufficient to generate VP6 FLVs for a web site, but for more complex applications we need more, much more.

It is for all these reasons that I'm still using, with success, Spark-coded FLVs encoded by FFMPEG. FFMPEG gives us what we need, unfortunately with the exception of VP6 output.

Since my first encounter with FFMPEG I have always used it for FLV encoding in the old Spark format because, after a short 'training' period, FFMPEG proves to be a very valuable piece of software.

It produces really good quality output; in fact, it uses one of the best H.263 encoding routines. It can encode a very wide range of source formats (AVI, DivX, MPEG, 3GPP, FLV), choosing resolution, fps, keyframe interval and average bitrate. It's also possible to choose maximum and minimum bitrate, maximum and minimum macroblock quantization (i.e. maximum and minimum frame quality), motion estimation strategies, and single- or two-pass encoding. It is usable in server-side environments or in desktop applications. It's blindingly fast, and did I mention that it is open source and therefore free? What more could we want?

The only defect is the aforementioned lack of VP6 encoding. But in a lot of applications, the automation, the speed and the cost of the encoding matter much more than absolute quality. I can mention the new Google Video beta service, where a wide range of source video formats are compressed server-side into a single format (FLV) using FFMPEG. In my latest projects, for example, it was imperative to encode video programmatically. It's true that it's possible to buy an On2 library to encode VP6, but: it is expensive ($2,500 per server for a server-side encoding solution, or $4,000 plus revenue share for stand-alone applications), it is slow, and it isn't able to recompress FlashCom (FMS) encoded FLVs.

In the next few blog entries I'll talk about some key features and case histories involving this great piece of software. I'll cover best-practice settings, FLV recompression, 3GPP-to-FLV conversion and FFMPEG's amazing encoding speed.

That's all for now, stay tuned!

Fabio Sonnati ::



December 3, 2005

TechNote :: FMS Dynamic buffering strategy

In this TechNote I'm going to talk about the buffering of an FMS (or FCS) stream and about the implementation of a dynamic buffering scheme for recorded streams.

How FMS buffering works


When subscribing to a pre-recorded FMS stream, it's possible to define a subscription buffer with the method netStream.setBufferTime(N). The buffer behaviour is indeed very simple and similar to that of other media streaming servers. The Flash Player receives the stream and stores it until the buffer is full; at that moment the stream starts to play, and the Flash Player tries to keep the buffer filled to its nominal length. If the bandwidth is insufficient, the buffer slowly decreases; when the buffer is empty, playback stops until the buffer again reaches the desired length. The picture below illustrates the buffer behaviour for a hypothetical profile of available client-server bandwidth.

The first profile shows a hypothetical client-server bandwidth, normalized to the bandwidth required by the stream. The second profile shows the buffer length, normalized to the requested value (e.g. 4 seconds). At the beginning the buffer fills almost linearly, because playback has not yet started and the bandwidth is constant. At time T1 the buffer is full, the stream starts to play, and the buffer is kept full by a bandwidth greater than required (available bandwidth > 100% of the stream's requirement). At T2 the available bandwidth becomes insufficient and the buffer starts to empty. At T3 the buffer is completely empty, the stream is paused, and the buffer starts to fill again almost linearly thanks to the almost constant bandwidth (notice that the refill time is more than doubled, because the bandwidth has halved). Once the full state is reached, the stream exits the pause, and the buffer length becomes the net result of buffer draining (due to playback) and buffer filling (due to the available bandwidth). At time T5 the available bandwidth climbs back over the 100% required value, and the buffer fills again up to the full state (T6).

We have seen that with this standard, static buffering method, if the bandwidth drops below the required value, the buffer may be insufficient to compensate for the bandwidth gap, and one or more rebufferings may be necessary to get through it. Obviously, with a bigger, deeper buffer this might not happen.

We must therefore choose a buffer depth keeping in mind that:

1. to prevent rebuffering, the buffer must be deep. A deeper buffer can compensate for a longer period of insufficient bandwidth.

2. a buffer that is too deep means a longer buffering time and probably a worse viewer experience.

Why not use a dynamic buffer instead of a static one, then?

The rise of dynamic buffering

Using the onStatus event of our netStream object, we can detect when the buffer is full or empty. So we can set a short starting buffer length and then, once the buffer-full status is reached, raise it to a higher value to exploit any excess bandwidth. If the buffer runs empty, we lower the buffer length back to the starting value. The code is quite simple:


// Init
...
var startBL:Number = 2;  // short starting buffer (seconds): fast playback start
var mainBL:Number = 15;  // deep main buffer (seconds): resilience to bandwidth drops
in_ns.setBufferTime(startBL);
in_ns.onStatus = Status;

function Status(infoObject:Object) {
  // buffer filled: enlarge it to build up a "supply" of video
  if (infoObject["code"] == "NetStream.Buffer.Full") { in_ns.setBufferTime(mainBL); }
  // buffer drained: fall back to the short buffer for a quick restart
  if (infoObject["code"] == "NetStream.Buffer.Empty") { in_ns.setBufferTime(startBL); }
}
Let's take a look at the dynamic buffer behaviour:

Compared to the previous behaviour, when the starting buffer (SB) is full the buffer length is enlarged to exploit the amount of bandwidth beyond the required one. Until time T2, when the available bandwidth drops below 100%, the buffer continues to fill. Thanks to this "supply" of video, the lack of bandwidth between T2 and T3 is handled without video interruptions. At time T3 the bandwidth returns above 100% and the buffer grows again.

The dynamic buffer guarantees a short start time (using a low startBL) and, at the same time, an arbitrarily high resilience to bandwidth fluctuations. Dynamic buffering is most useful when the average bandwidth is higher than required (a very common scenario). When the average bandwidth merely equals the required one it is still useful, but the starting buffer must be deep from the beginning, with fewer advantages.

If the video is often seeked, it may be useful not to set mainBL too high.
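
A hypothetical refinement for frequently seeked videos (my own sketch, not part of the scheme above, reusing the same in_ns and startBL names) is to restore the short buffer just before each jump, letting the onStatus handler enlarge it again once playback resumes:

// hypothetical helper: seek with a quick restart
function seekTo(seconds:Number):Void {
  in_ns.setBufferTime(startBL); // only the small buffer must refill after the jump
  in_ns.seek(seconds);          // Buffer.Full will raise it back to mainBL
}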

Dynamic buffering is useless with progressive-download video, because the download is inherently dynamic: while the buffer fills, the whole FLV is cached by the player, even beyond the buffer.

Fabio Sonnati::



November 13, 2005

New Screen Capture driver compatible with Flash Player

I have just discovered a very impressive screen capture driver compatible with Flash Player. VHScrCap is a driver capable of grabbing the screen at any resolution and any fps and of producing a WDM-compatible video stream (therefore usable by Flash Player to stream the screen to Flash Media Server 2).

This driver is very fast (especially compared to Camtasia): it absorbs only 20% of the CPU power to grab and broadcast an 800x600, 5 fps screen on my Athlon 64 3500+.

It can track a specific window, the entire screen or a specific portion of the screen. It can be configured with an XML config file and in any case retains the last settings.

Last but not least, it is free for personal use (and reasonably cheap for other uses).
If you were searching for a valid alternative to Camtasia, this driver is the answer:

VHScrCap: http://www.hmelyoff.com

Fabio Sonnati::



October 16, 2005

Flash video real-time deinterlacing, updated.

This short entry is to say that in the last two weeks I have tried to improve the real-time deinterlacing method I described in the last post.

Indeed, the blend & enhance filter previously presented leads to very good results, but blending inevitably introduces a partial loss of detail. The simultaneous enhancement filter improves the overall appearance (a perceptual enhancement) but can't rebuild lost details.

The best approach would be to filter only the image zones where it is necessary (the zones where there is motion between fields, and therefore interlacing comb artifacts) and leave the other zones untouched. In this way, the (inevitable) loss of detail would be present only in high-motion zones. I think I have found an interesting procedure to achieve that goal.

Let's apply a deinterlacing filter to a copy of the video and leave the original untouched. Let's name the original oVideo and the deinterlaced one dVideo.
Now filter the original video with a properly "tuned" vertical edge filter. This generates a new video in black and white where the white pixels mark high-motion zones. Now let's convert this eVideo (edge video) into an alpha channel used to mix the previous dVideo and oVideo. What is the final result? Where the alpha channel is white it shows dVideo, and where it is black it shows oVideo. This approach preserves the original quality in the zones without motion between fields. Obviously, applying two filters and an alpha-channel blend requires a lot of CPU processing and is usable only for low-fps video. Indeed, I use low fps quite often, for example when I grab a computer screen with a scan converter or when I use a document camera.
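
Here is a minimal sketch of the idea, assuming Flash 8's BitmapData API; oVideo_mc (showing the untouched source), dVideo_mc (showing the deinterlaced copy) and screen_mc are hypothetical clip names, and the edge kernel values are illustrative and would need tuning:

import flash.display.BitmapData;
import flash.filters.ConvolutionFilter;
import flash.geom.Point;

// oVideo_mc, dVideo_mc and screen_mc are hypothetical clips prepared elsewhere
var w:Number = 360;
var h:Number = 288;
var zero:Point = new Point(0, 0);

var oSnap:BitmapData = new BitmapData(w, h, false, 0); // original frame
var dSnap:BitmapData = new BitmapData(w, h, false, 0); // deinterlaced frame
var eMask:BitmapData = new BitmapData(w, h, true, 0);  // edge video, reused as alpha mask
var out:BitmapData   = new BitmapData(w, h, false, 0); // final composite

// vertical edge kernel: reacts to the horizontal stripes of comb artifacts
var edge:ConvolutionFilter = new ConvolutionFilter(3, 3, [0, -2, 0, 0, 4, 0, 0, -2, 0], 1, 0);

function deinterlaceFrame():Void {
  oSnap.draw(oVideo_mc);                                           // grab oVideo
  dSnap.draw(dVideo_mc);                                           // grab dVideo
  eMask.draw(oVideo_mc);                                           // build eVideo...
  eMask.applyFilter(eMask, eMask.rectangle, zero, edge);           // ...with the edge filter
  eMask.copyChannel(eMask, eMask.rectangle, zero, 2, 8);           // green channel -> alpha channel
  out.copyPixels(oSnap, oSnap.rectangle, zero);                    // start from oVideo
  out.copyPixels(dSnap, dSnap.rectangle, zero, eMask, zero, true); // mix in dVideo where white
}

screen_mc.attachBitmap(out, 1, "auto", true); // show the composite
screen_mc.onEnterFrame = deinterlaceFrame;    // update it on every frame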

I hope to publish the final source code soon. In the meanwhile, look at the picture of the filter that identifies the motion areas:

 

Fabio Sonnati ::


September 26, 2005

TechNote :: Flash real-time deinterlacing best-practice.

In the last three years I have had to manipulate high-quality, full-resolution video sources with Flash and FlashCom. By high-quality videos I mean full-PAL sources: 720x576 at 25 fps.
Usually the source was the output of some biomedical device, such as an echocardiograph or other video surgery devices, but also document cams. The main problem in grabbing a full-PAL (or full-NTSC) source is the interlaced nature of the video signal.

In a standard video signal, frames are composed of two different fields. The first field carries the odd lines of the frame while the second field carries the even lines. The problem is that the two fields of the same frame are sampled at different times. In short: a 25 fps video is really the result of 50 samples in time, and each pair of samples (a pair of fields) forms a single frame.

Deinterlacing a video source is indeed a very complex job, but if you want to display the final result on a PC screen you have to deinterlace it in some way, or you'll get a very poor result, especially in motion scenes. (If you want more explanation about interlacing, read this: http://www.100fps.com/) How to solve this problem? At first I searched for external devices, but their cost was really high (they are used mainly for video broadcast, and prices start from $40,000). Another solution was to acquire the video from a PC-TV card with software like DScaler (which deinterlaces) and then grab the screen in real time using Camtasia. Even this wasn't practical, because of the huge CPU power needed to grab 800x600 at 25 fps with Camtasia.

At last I found a practical trade-off: a Flash deinterlacing solution...

How to de-interlace video in real-time :: old method

I noticed (and maybe you did too) that Flash Player interpolates every video pixel over the Flash screen grid if you enable smoothing on the video object. Therefore, if you set non-integer coordinates on the video object, Flash Player will introduce a little blurring.
If you set coordinates like X:0 Y:0.5, Flash Player will distribute the pixels of each video line exactly between two subsequent screen lines.
The final effect is a vertical fusion between pixels. If the object is playing an interlaced video source, the final result is a consistent reduction of interlacing, at the cost of a limited drop in vertical resolution. This "vertical blending" in fact blends the two fields together.
This works with Flash Player starting from version 6.
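
A minimal sketch of this trick (my_video is an illustrative name for a Video object placed on the stage):

my_video.smoothing = true; // force pixel interpolation
my_video._y = 0.5;         // half-pixel offset: every video line is blended with the next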

Now, with Flash Player 8, it is possible to do better, and with finer control, using convolution filters.

How to de-interlace video in real-time :: new method

Flash Player 8 introduces a very interesting and flexible set of image manipulation features. One of my favourites is the convolution filter. It is not very simple to explain how a convolution filter works; let's just say that it blends multiple pixels together using weights.
With a proper filter it is not only possible to blend the fields of an interlaced source, but also to simultaneously enhance resolution, obtaining a better final result.

A convolution filter like this:

0 1 0
0 1 0
0 0 0

simply blends two pixels vertically, while a filter like this:

 0  8  0
-1  8 -1
 0  0  0

blends pixels vertically and enhances resolution horizontally. But a more complex and more efficient filter is this one:

-1  4 -1
-2  8 -2
-1  4 -1
The code is this:

// movie is a movieclip containing the video object to deinterlace
var filter:flash.filters.ConvolutionFilter = new flash.filters.ConvolutionFilter();
filter.matrixX = 3; // kernel width
filter.matrixY = 3; // kernel height
filter.matrix = [-1, 4, -1, -2, 8, -2, -1, 4, -1];
filter.divisor = 8; // equal to the sum of the coefficients, so overall brightness is preserved
movie.filters = [filter];

Lets take a look at the final results:


Original Interlaced video A


real-time deinterlaced video A

  

Interlaced video B                         Deinterlaced video B    


Interlaced Video C


Deinterlaced Video C

As you can see, the results are really good. In recent implementations I have used an adaptive filter that modulates the deinterlacing effect depending on the motion level of the source. This preserves resolution in quiet scenes.

The real problem with these deinterlacing techniques is the efficiency of the approach for real-time broadcast: the deinterlacing filter must be applied on the client that subscribes to the interlaced stream, while the publisher acquires the source interlaced.

And the video compression of an interlaced source is much less bandwidth-efficient than that of a deinterlaced one, because of the extra information needed to encode the rich vertical-frequency coefficients. In short: streaming an interlaced source requires a lot of bandwidth. This is the real problem.

A recent solution

In my endless quest for efficient video grabbers, I have recently found a really interesting frame grabber: the Terratec Grabster AV250. I had used the previous model, the Grabster AV200, a very good USB 2.0 frame grabber, for two years.

The AV250 acquires the video source through an S-Video connection, converts it at 10 bits per colour per pixel without compression, and sends a huge 200 Mbit/s uncompressed stream to the computer over the USB 2.0 connection. The SNR and colour separation are superb for a $150 grabber. But the really unique feature of the AV250 is the ability to deinterlace the video at *driver level*. This means that Flash can acquire and stream a truly deinterlaced video source with much better efficiency.

The AV250 can deinterlace the source with two approaches: a high-motion mode, very similar to the one previously described, and a low-motion mode, a truly progressive deinterlacing technique that preserves resolution but produces some small artifacts in high-motion scenes.

Fabio Sonnati ::

August 17, 2005

D-Frames explanation and D-Killer utility

Several readers of my "Flash Video Technology and Optimizations" white paper have asked me for more details about D-frames. To satisfy this request I have prepared a detailed explanation and, moreover, a little gift: a tool to delete D-frames from a FlashCom-recorded FLV. To better understand this article, I suggest you read the paper (here).

What are D-Frames?

Understanding Flash Spark video technology will remain very useful even after the release of Flash 8, which gives us a new video codec (VP6). This is because the Spark codec will still be used for real-time video encoding (obviously in conjunction with Flash Communication Server). In fact, VP6 can be decoded by Flash Player but not encoded (see my post of July 22 about that).

Spark video technology is derived from the H.263 standard with some differences. One of these is the substitution of B-frames with D-frames. Therefore, a Spark frame can be one of three types: keyframe (I), prediction frame (P) or disposable frame (D).

The use of D-frames is not mandatory for encoders. Encoders like Sorenson Squeeze, Flix, the Flash encoder or FFMPEG don't use D-frames, and their frame sequence looks like this:

I - P - P - P - P - .... - P - P - P - P - I - P - P - P ...

This frame sequence starts with a keyframe, which is a complete JPEG-like picture encoded without reference to other frames, followed by an arbitrary number of P-frames, which are encoded as differences from the previous I- or P-frame. Keyframes are much larger (in terms of memory requirements) than P-frames, but they are necessary as entry points into the stream. P-frames form a chain: if a P-frame is dropped or lost because of bandwidth problems, it is impossible to rebuild the following P-frames, and the decoder must wait for the next keyframe. The balance between P- and I-frames is a well-known problem: keyframes too far apart can cause slow resynchronization and a bad video experience; keyframes too close together can cause an excessive rise in bandwidth requirements (to obtain the same quality).

To reduce this problem, Spark video technology proposes the use of D-frames. Disposable frames are very similar to P-frames: they are encoded as differences from the previous I- or P-frame, but they cannot be used as a reference by other frames (neither P nor D). This means that D-frames are attached to a chain of P-frames, and each single D-frame can be discarded, because it is never used as a reference frame by other frames.

The typical frame sequence looks like this:

I - D - P - D - P - D - P - I - D - .... - P - D - P - I - D - P - D - P - D - ...

Let's take a zoomed view of the first frames:

I - D0 - P1 - D1 - P2 - D2 - P3 - D3 - I - ...

This may be the case for a stream at 8 fps with a keyframe (I) every second (every 8 frames).
D0 is decoded using the previous I-frame as reference.
P1 is decoded using the previous I-frame as reference, not the D0 frame.
D1 is decoded using the previous P-frame (P1) as reference.
P2 is decoded using the previous P-frame (P1) as reference, not the D1 frame.

What is the advantage? If the bandwidth is too low to sustain 8 fps streaming, the server can "discard" the D-frames, obtaining a stream with a lower frame rate, ranging from 7 down to 4 fps. Even without the D-frames, the chain between I- and P-frames is still intact and the stream can be perfectly decoded. 4 fps is the lower limit in the example above, because it isn't possible to drop P-frames without jumping to the next keyframe...

This approach (the use of D-frames) obviously has a small drawback. As you may have noticed, a D-frame uses the immediately previous frame as reference, but a P-frame uses the previous P-frame, which is two frames away, not one. This means that the motion prediction in P-frames is less efficient than in the canonical sequence without D-frames. Using D-frames lowers the overall encoding efficiency and requires more bandwidth at a given quality level (probably 5-10%, or even more in high-motion scenes).
Real-time video encoded with Flash Player and sent to a Flash Communication Server always uses D-frames. Therefore, FlashCom can discard D-frames in a real-time stream to match the actual bandwidth of the subscribing client.

FLV D-Killer utility

Let's conclude this article with the publication of my first "public" utility for FLV videos.
FLV D-Killer is a very simple tool that erases the D-frames from an FLV source (it works only on FlashCom-recorded FLVs). This is useful to reduce the size of a video recorded with FlashCom. Obviously the erased D-frames are not replaced, so the final frame rate is halved. Consider this scenario: I record a stream with FlashCom at 24 fps and a target bandwidth of 350 kbit/s. How can I proceed if I want a different bitrate for clients with lower bandwidth?
The only encoder capable of re-compressing an FLV is FFMPEG, but if the original FLV has an audio track, FFMPEG can't do the trick without losing the audio track (because of the Nelly Moser codec). Now you can use this tool instead.
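
As a side note, for recordings without audio (or when losing the audio is acceptable), the FFMPEG re-compression route could look like this sketch, with illustrative values:

FFMPEG.exe -i recorded.flv -an -b 200 -r 12 recompressed.flv

The -an switch explicitly drops any audio track, sidestepping the Nelly Moser problem at the cost of a silent output.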

The picture above shows the typical frame weights, for each frame type, in a real-time encoded Spark video. Notice the high weight of the keyframes, and that D-frames are always a little smaller than the following P-frames.

Usually, erasing the D-frames from an FLV reduces its size by 30 to 40% (depending on the keyframe frequency). So the original FLV at 24 fps and 350 kbit/s becomes an FLV (without D-frames in it) at 12 fps and about 200 kbit/s. Obviously, an FLV produced by Sorenson Squeeze, FFMPEG or the other encoders mentioned above doesn't benefit from the shrink, since they don't write D-frames.

You can find the tools here:
FLV D-Killer

Try it and send me some feedback...


Fabio Sonnati ::


July 22, 2005
Flash Player 8 lacks VP6 realtime encoding

I quote a recent interview with Doug McIntyre, CEO of On2 Technologies: "...Macromedia paid for the rights to distribute [On2's VP6] decoder with Flash. We retained the rights to sell the encoder. Our targets with that are the million or so Flash developers and major publishers to use our video compression codec. On the Flash server side, we have the codec for compressing the video...".
What does this mean? It means that Flash Player 8 lacks VP6 real-time encoding: Flash Player 8 can only decode VP6 movies. On2 retains the rights and revenues over the VP6 encoder, which will (probably) be included in the IDE, in On2's proprietary encoding tools and, presumably, in the future Flash Communication Server ("...on the flash server side, we have the codec for compressing video..."). You can find the interview ::here::

Fabio Sonnati ::

July 14, 2005

VP6 is near...

As you know, Macromedia has released Flash Player 8 for beta testing. 8Ball is very near, and with it the new Flash video codec derived from On2's VP6 (it is not clear whether we will be so lucky as to also see VP7 in action).

What is VP6? VP6 is an H.263++ / H.264-class codec: a state-of-the-art codec which produces peak signal-to-noise ratios (PSNR) declared to be significantly better than Windows Media 9 and H.264. However, the most important features of this codec are its small footprint and its high speed in both encoding and decoding. On2 claims it is capable of D1 real-time encoding on a 1.5 GHz CPU. VP6's decode complexity is the same as MPEG-4 in its fastest profile and considerably lower (50%) than H.264 (also in fast profile).

To better understand the VP6 features, I suggest you read my previous white paper on video compression standards.

:: VP6 Technical Features ::

VP6 starts from the classic H.263 basic principles and adds On2-specific optimizations to achieve H.264-class performance with much lower encoding and decoding complexity. VP6 uses the classical 8x8-point DCT but introduces some techniques to improve the keyframe compression ratio.

For motion compensation it uses quarter-pel motion, long motion vectors and improved compensation in the UV planes. VP6 doesn't use B-frames, and a specific technique is dedicated to the prediction of low-order frequency coefficients. VP6 can probably use two reference frames, but both in the past (no B-frames are used). In both intra- and inter-frame compression, improved and adaptive quantization strategies are employed to preserve quality.

VP6 makes use of sophisticated context modeling in the entropy encoder: the algorithm uses prior coded data to optimize how subsequent frames are encoded. The information can come from lower frequencies in the same general location, from already-coded areas in the same frame, and even from prior coded frames. At the start and at the end of the processing chain it is possible to apply filters to reduce noise in the encoded video and to enhance quality in the decoded video (block and ringing artifact removal).

The VP6 standard codec uses three separate techniques to guarantee that you hit the desired data rate: quality (frame) modulation, spatial resampling and frame dropping; it automatically adjusts the quantization levels, the encoded frame dimensions, or drops frames.

VP6's theoretical features are very promising.

*** Unfortunately, we won't see a real-time VP6 encoder in Flash Player 8, because the deal with On2 is for the decoder only. On2 retains the rights and revenues for the encoder, which will probably be sold with the Flash IDE, Flash Communication Server (2.0) and other On2 proprietary encoding tools. See the July 22 update. ***

Fabio Sonnati ::

July 2, 2005
Flash Video Technology and Optimization is out

My first contribution to this site is the publication of a technical white paper about Flash video technology. The paper has three chapters: in the first I introduce the basic international standards for video compression (H.261, H.263); in the second I analyse the Flash implementation (the Flash codec derives from H.263); and in the third I describe two optimization strategies I developed in the past to improve Flash real-time video compression by around 20-30%. You can find the paper here.

Fabio Sonnati ::

July 1, 2005
Welcome to my Blog !

Hello to all FlashCom developers and welcome to my blog. I created this page to share my Flash Communication and Flash Video experiences with other coders.
I hope you will find here interesting tips and tricks about this exceptional Macromedia product.

Flash Video and FlashCom, its dedicated delivery platform, will evolve in the near future, and I think these technologies hold huge potential: the future of Internet interaction will be dominated by the flexibility, ubiquity and ease of use of the Macromedia Flash Platform. As with any other development platform, Flash offers a lot of tools and instruments to build applications, but there is always room for user-developed improvements and optimizations.

We are here to try and optimize, and I offer my almost three years of FlashCom coding experience to reach this goal. Good reading.

Fabio Sonnati ::


My name is Fabio Sonnati. I'm a freelance ICT consultant with a degree in Electronic Engineering. I'm lead developer and co-founder of Progetto Sinergia, a team of ICT consultants specialized in Internet, video and multimedia applications.
This is the blog where I share my knowledge and tools about Flash Media Server and Flash Video. Read this to know more about Flash Video Factory, our Flash Video based streaming platform. You'll also find my Italian blog there.
