In a recent post about running iOS app tests from the terminal I used an animated gif to show the difference in output between the different modes. I did everything using built-in OS X technology and free and open source tools, so I thought I'd share the workflow I used.

xcodebuild test with xcpretty output

The process is:

  1. Use QuickTime to record the screen
  2. Use ffmpeg to extract the frames from the video as images
  3. Use convert, part of the ImageMagick suite, to create a gif out of the sequence of frames

Record the video

This is pretty easy: just open QuickTime, go to File > New Screen Recording, or hit Ctrl Cmd N. You’ll be asked to drag to select the area of the screen you want to record, but you can also simply click and the recording will start.

Once you’re done just click the “Stop” icon in the menu bar, or right click on the QuickTime app icon in the dock and select “Stop Screen Recording”.

You’ll then see the recorded video so you can save it where you want.
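If you’re curious about what you just recorded, ffprobe, which usually comes installed alongside ffmpeg, can show the resolution and frame rate of the .mov before you start extracting frames. The file name here is just an example:

# print container and stream info, including resolution and frame rate
ffprobe screen-recording.mov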

Extract the frames

ffmpeg -i screen-recording.mov frames-folder/f%03d.png

ffmpeg is a very fast video and audio converter that can also grab from a live audio/video source. It can also convert between arbitrary sample rates and resize video on the fly with a high quality polyphase filter.

The tool synopsis is: ffmpeg [global_options] {[input_file_options] -i input_file} ... {[output_file_options] output_file} ... so the line above simply says “take the .mov as input, and output its frames into the frames-folder”.
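One thing worth noting: as far as I know ffmpeg won’t create the output folder for you, so make sure it exists before running the command. Something like:

# the output folder needs to exist before ffmpeg can write the frames into it
mkdir -p frames-folder
ffmpeg -i screen-recording.mov frames-folder/f%03d.png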

How is this possible? I don’t really know, it just works. Reading till the end of the manpage…

You can extract images from a video, or create a video from many images:

For extracting images from a video:

   ffmpeg -i foo.avi -r 1 -s WxH -f image2 foo-%03d.jpeg

This will extract one video frame per second from the video and
will output them in files named foo-001.jpeg, foo-002.jpeg, etc.
Images will be rescaled to fit the new WxH values.

So I guess that ffmpeg is smart enough to notice that the output path is formatted in a way that expects multiple images, and therefore it extracts the frames.
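The -r option from the manpage example above is also handy if the recording produces more frames than you need: it sets how many frames per second get extracted. A sketch, reusing the same file names as before:

# extract roughly 10 frames per second instead of every frame,
# which keeps the number of images (and the final gif size) down
ffmpeg -i screen-recording.mov -r 10 frames-folder/f%03d.png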

Merge the frames into a gif

convert \
  -delay 5 \
  -loop 0 \
  -layers Optimize \
  -layers RemoveDups \
  -fuzz 5% \
  frames-folder/f*.png \
  screen-recording.gif

The -loop 0 option makes the gif loop forever. I can’t find documentation for this in convert’s manpage, but it’s the expected behaviour for the argument of a looping option, and it works.

-layers Optimize uses the Optimize method to optimize the image. This is a quite recursive way of saying that a set of reasonable optimizations will be performed on the gif. I haven’t gone into the details of those, but you can read about them here.

-layers RemoveDups is used to remove any duplicate frames, whether they were already there or were generated by other optimizations. You can read more about it here.

The -fuzz 5% option is a trick to make the gif smaller. Looking at the manpage we read: -fuzz distance - colors within this distance are considered equal. Of course the higher the distance, the worse the gif will look. In my case, since most of it was white output on the black background of the terminal, values around 10% were still acceptable.
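An easy way to pick a value is to generate the gif a couple of times with different -fuzz settings and compare sizes and visual quality. A quick sketch, assuming the same frames folder as above:

# generate two versions with different fuzz values...
convert -delay 5 -loop 0 -layers Optimize -layers RemoveDups -fuzz 5% \
  frames-folder/f*.png screen-recording-5.gif
convert -delay 5 -loop 0 -layers Optimize -layers RemoveDups -fuzz 10% \
  frames-folder/f*.png screen-recording-10.gif

# ...then compare their sizes
ls -lh screen-recording-*.gif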

The last two arguments are the input file(s) and the output. Like for ffmpeg, we see that convert is smart about understanding what it’s supposed to do based on the inputs and output it gets.
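Putting it all together, everything after the recording step fits in a handful of lines. A sketch, assuming the recording was saved as screen-recording.mov:

# 1. extract the frames from the recording
mkdir -p frames-folder
ffmpeg -i screen-recording.mov frames-folder/f%03d.png

# 2. merge the frames into an optimized, looping gif
convert \
  -delay 5 \
  -loop 0 \
  -layers Optimize \
  -layers RemoveDups \
  -fuzz 5% \
  frames-folder/f*.png \
  screen-recording.gif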

Aaaand that’s it! If this post helped you make a gif, tweet it to @mokagio.

Food for thought

  • This process has quite a number of steps involved. Is there any app, even a paid one, that could make it easier or smoother?
  • What other ways to optimize the output could we use?