Any questions about anything I have ever posted? Send me an email.


Check out my laser gun.

Here is the laser gun. I downloaded a ricochet from FirstCom, increased the volume to the point of distortion, EQ’d the highs and lows, combined it with a slowed-down square wave using Vari-Fi and the signal generator, added a shaker, and duplicated the final sound with a ping-pong delay. Overkill, I know, but I like it. Check out the link below!
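For anyone curious what that chain looks like outside of a DAW, here is a rough sketch of the same kind of recipe in Python/NumPy: a square wave slowed down vari-speed style, driven into clipping, and bounced between channels with a ping-pong delay. The oscillator frequency, delay times, and gains here are made up for illustration; this is not the exact patch described above.

```python
import numpy as np

SR = 44100  # sample rate

def square_wave(freq_hz, seconds):
    """Naive square wave oscillator."""
    t = np.arange(int(SR * seconds)) / SR
    return np.sign(np.sin(2 * np.pi * freq_hz * t))

def vari_speed(x, factor):
    """Slow a sound down (factor > 1) by resampling, like a vari-fi effect:
    pitch and duration change together."""
    idx = np.arange(0, len(x), 1.0 / factor)  # more output samples = slower
    return np.interp(idx, np.arange(len(x)), x)

def distort(x, drive=8.0):
    """Push the signal into soft clipping, like cranking the volume."""
    return np.tanh(drive * x)

def ping_pong(x, delay_s=0.15, feedback=0.5, taps=4):
    """Duplicate a mono sound into stereo echoes that alternate L/R."""
    d = int(SR * delay_s)
    out = np.zeros((len(x) + d * taps, 2))
    out[:len(x), 0] += x          # dry signal on the left
    out[:len(x), 1] += x          # and the right
    gain = feedback
    for i in range(1, taps + 1):
        ch = i % 2                # alternate channels: the "ping pong"
        start = i * d
        out[start:start + len(x), ch] += gain * x
        gain *= feedback
    return out / np.max(np.abs(out))  # normalize to avoid clipping on export

zap = ping_pong(distort(vari_speed(square_wave(880, 0.2), 3.0)))
```

Writing `zap` out as a stereo WAV (with any audio library) gives you a crude zap that you can then layer and EQ the way the post describes.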

Marc Ecko’s Getting Up…..

Marc Ecko’s Getting Up is one of my favorite games of all time for one reason: the game’s main focus, “the suppression of creativity.” This game was more to me than just a game, and looking back on it now I realize it taught me more than I thought. As a kid playing this game, seeing the main character fight for the creative control to spread his artwork throughout his neighborhood, to be recognized and known as one of the greatest to ever touch a paint can, was a real inspiration. The sound design in this two-minute cutscene isn’t the best, but it does a fantastic job of capturing the actual meaning of the game; the spray-can sounds are somewhat repetitive, and in some places the composed music gets too loud. The music was composed by one of the Wu-Tang Clan’s founding members and one of the most gifted hip-hop producers of the last two decades, The RZA. The dialogue is pretty minimal in this cutscene, but throughout the game itself and many of the other cutscenes it’s spot on. The main character is voiced by Talib Kweli, one of the greatest rappers of all time. This cutscene was unique to me because it shows the main character’s triumph in the end.

Nintendo… The Big Leagues

World renowned for their great visual artwork, storytelling, and ability to capture the player’s attention, Nintendo is by far one of the greatest gaming companies in the world. Their main headquarters is located in Kyoto, Japan, and from that headquarters games such as The Legend of Zelda, Super Mario Bros., and Metroid were created, and their names are still echoed to this day. Nintendo employs more than 5,000 individuals, and among those individuals are some of the greatest sound designers and music composers in the world. I couldn’t find an exact number for how many people are involved in their audio production, but among the company’s most celebrated creators is Shigeru Miyamoto. Miyamoto’s body of work includes some of Nintendo’s greatest titles, such as Donkey Kong, Super Mario Bros., and Super Mario 64, and he continues to work to this day.

The Love Game Engine… <3

LÖVE is an *awesome* framework developed by the LÖVE team that you can use to make 2D games in Lua. It’s free, open source, and works on Windows, Mac OS X, and Linux. The LÖVE engine is licensed under the zlib/libpng license (which is very short and human readable), which allows you to use the source code and even modify it as long as you do not claim that the original source code is yours. Games such as Mr. Rescue, Move or Die, and RETRO BUG – BLASTING ACTION use the LÖVE engine, and they play very smoothly. Compared to other engines, LÖVE seems to be a strong competitor, and the fact that it is free makes it even more attractive.

Gears of War

Gears of War is a 2006 military science fiction third-person shooter video game developed by Epic Games and published by Microsoft Game Studios. It is the first installment of the Gears of War series. It was initially released as an exclusive title for the Xbox 360 in November 2006 in North America, Australia, and most of Europe. A Microsoft Windows version of the game was developed in conjunction with People Can Fly and released a year later, featuring new content including additional campaign levels, a new multiplayer game mode, and Games for Windows – Live functionality.



Joust game.

Joust, like many other Atari 2600 games, was born in the arcade. It was another game that had to make compromises to fit on the limited hardware. But limitation breeds innovation, and some of the tricks they used were downright brilliant.

The game casts the player as a knight with a lance riding a flying ostrich, with the goal of knocking the other ostrich-riding knights off their mounts, gathering eggs, and surviving in general. As the waves progress, more enemies attempt to unseat the player. And there’s lava. Did we mention that there’s lava?

Sync Tanks: The Art and Technique of Postproduction Sound

by Elisabeth Weis  (Cineaste, 1995)

The credits for John Ford’s My Darling Clementine (1946) include Wyatt Earp as technical consultant[1] but only one person responsible for all of postproduction sound (the composer). The credits for Lawrence Kasdan’s Wyatt Earp (1994) list the names of thirty-nine people who worked on postproduction sound. The difference is not simply a matter of expanding egos or credits.

“An older film like Casablanca has an empty soundtrack compared with what we do today. Tracks are fuller and more of a selling point,” says Michael Kirchberger (What’s Eating Gilbert Grape?, Sleepless in Seattle). “Of course a good track without good characters and storyline won’t be heard by anyone.”[2]

With soundtracks much denser than in the past, the present generation of moviemakers has seen an exponential growth in the number of people who work on the sound after the film has been shot. What do all those people add, both technically and esthetically? “When I started out, there was one sound editor and an assistant,” says picture editor Evan Lottman (The Exorcist, Sophie’s Choice, Presumed Innocent). “As editor for a big studio picture in the early Seventies I usually cut the ADR [dialog replaced in postproduction–EW] and the music as well.” Today an editor on a major feature would either hire a supervising sound editor who gathers a team of sound specialists, or go to a company like C5, Inc., Sound One, or Skywalker that can supply the staff and/or state-of-the-art facilities.

Sound is traditionally divided into three elements: dialog, music, and effects (any auditory information that isn’t speech or music). Although much of the dialog can be recorded during principal photography, it needs fine tuning later. And almost all other sound is added during postproduction.

How does sound get on pictures? The following is a rough sketch of the procedure for a major Hollywood feature production. But it is not a blueprint; exact procedures vary tremendously with the budget and shooting schedule of the film. Blockbuster action films, for instance, often devote much more time and money to sound effects than is described below. The process certainly does not describe how the average film is made abroad; few other cultures have such a fetish for perfect lip-synching as ours, so even dialog is recorded after the shoot in many countries.

This article can only begin to suggest how digital technologies are affecting post-production sound. For one thing, there is wide variation in types of systems; for another, digital sound techniques are evolving faster than alien creatures in a science fiction movie.


Even the sound recorded live during principal photography is not wedded physically to the image and has to be precisely relinked during postproduction. It is usually recorded on 1/4″ magnetic tape (though there are alternatives) and marked so that it can be ultimately rejoined with the picture in perfect synchronization.

On the set the location recordist (listed as production mixer) tries to record dialog as cleanly and crisply as possible, with little background noise (a high signal-to-noise ratio). A boom operator, usually suspending the microphone above and in front of the person speaking, tries to get it as close as possible without letting the microphone or its shadow enter the frame.

An alternative to a mike suspended from an overhead boom is a hidden lavalier mike on the actor’s chest, which is either connected to the tape recorder via cables or wired to a small radio transmitter also hidden on the actor. But dialog recorded from below the mouth must be adjusted later to match the better sound quality of the boom mike. And radio mikes can pick up stray sounds like gypsy cabs.

While on the set, the sound recordist may also ask for a moment of silence to pick up some “room tone” (the sound of the location when no one is talking), which must be combined with any dialog that is added during postproduction (with reconstructed room reverberation) so that it matches what is shot on the set. (We don’t usually notice the sound of the breeze or a motor hum, but their absence in a Hollywood product would be quite conspicuous.) The set recordist may also capture sounds distinctive to a particular location to give the postproduction crew some sense of local color.

Ben Burtt Wall-E interview.



Ben Burtt Interview, Wall-E

interview by: Sheila Roberts

MoviesOnline sat down with Academy Award-winning sound designer Ben Burtt at the Los Angeles press day for his new film, “WALL-E.”

“WALL-E’s” expressive range of robotic voices was created by Burtt, whose memorable work includes creating the “voices” of other legendary robots, such as R2-D2 from the “Star Wars” films. Drawing on 30 years of experience as one of the industry’s top sound experts, Burtt was involved from the film’s earliest stages in creating an entire world of sound for all of the robotic characters and the spacecraft, as well as all environments.

Burtt is an accomplished filmmaker who has served as film editor on a vast array of projects. He began his work with director George Lucas in 1977 as sound designer of the original “Star Wars,” earning his first Academy Award.

Ben Burtt is a fabulous guy and we really appreciated his time. Here’s what he had to tell us about his latest movie:

Q: Clearly one of the challenges in voicing a character like Wall-E is not to do R2-D2, but there was a little R2-D2 in there, wasn’t there?

BEN BURTT: Well, the challenge of doing robot voices – Andrew (Stanton), if he’d wanted to, I suppose could have hired actors and had them stand in front of a mike and recorded their voices and dubbed that in over character action, but that would have of course not taken the whole idea of the illusion very far. What he wanted was the illusion that these robot characters, the speech and sounds they made were really coming from their functions as machines, that there was either a chip on board that synthesized the voice or the squeak of their motor would sound cute, and that would give an indication how they feel. The problem does go back, for me, to the sort of primal R2-D2 idea, which is how do you have a character not speak words or, in the case of Wall-E, just a very few words, but you understand what is going on in their head and they also seem to have a depth of character.

The trick has always been to somehow balance the human input to the electronic input so you have the human side of it. In this case, for Wall-E, it ended up being my voice because I was always experimenting on myself sort of like the mad scientist in his lab, you inject yourself with the serum. After weeks and months of experimenting it was easier to try it on myself as we worked it out. You start with the human voice input and record words or sounds and then it is taken into a computer and I worked out a unique program which allowed me to deconstruct the sound into its component parts. We all know how pictures are pixels now and you can rearrange pixels to change the picture. You kind of do the same thing with sound.

I could reassemble the Wall-E vocals and perform it with a light pen on a tablet. You could change pitch by moving the pen or the pressure of the pen would sustain or stretch syllables or consonants and you could get an additional level of performance that way, kind of like playing a musical instrument. But that process had artifacts in it, things that made it unlike human speech, glitches you might say, things you might throw away if you were trying to convince someone it was a human voice. That’s what we liked, that electronic alias thing that went along with it, because that helped make the illusion that the sound was coming from a voice box or some kind of circuit depending on the character.

So it is a matter of that relationship, how much electronic, how much human, and you sway back and forth to create the different sounds. A great deal of the sounds, for him as well as the other characters, are also sound effects which are chosen to go with the robot’s character. Wall-E has lots of little motors and squeaks and little clicks of his hands and those are all mechanical sounds that come from many, many different sources. The idea is to orchestrate all those little bits of sound to also be a part of his character so he can cock his head and look at something and you can hear a little funny squeak and in a way it’s an expressive sound effect. So that is important too. It was really that array of sounds that were used to define each character.
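Burtt’s custom tool isn’t public, but the basic moves he describes (shifting pitch, stretching syllables, and keeping the glitchy artifacts) can be sketched in a few lines of Python/NumPy. Everything here, from the vowel-like test tone to the slice points, is invented purely for illustration; it is not his actual program.

```python
import numpy as np

SR = 44100

def shift_pitch(x, semitones):
    """Naive pitch shift by resampling: pitch and speed change together,
    which is itself an audible 'electronic' artifact."""
    ratio = 2 ** (semitones / 12.0)
    idx = np.arange(0, len(x), ratio)       # read faster (or slower)
    return np.interp(idx, np.arange(len(x)), x)

def sustain(x, start, end, repeats=4):
    """Stretch a syllable by looping a short slice of it -- the kind of
    glitchy sustain that sounds machine-like rather than human."""
    return np.concatenate([x[:end]] + [x[start:end]] * repeats + [x[end:]])

# "Record" a vowel-like tone, raise it 7 semitones, sustain the middle:
t = np.arange(SR // 4) / SR
vowel = np.sin(2 * np.pi * 220 * t)
robot = sustain(shift_pitch(vowel, 7), 2000, 4000)
```

The looping seam in `sustain` is exactly the sort of glitch a speech tool would try to hide; here, as in the interview, it is the point.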



Making Your Film Sound Great, Part 2


Before you begin equalizing your dialogue, soundtrack, or the entire mix, there are a few things to keep in mind about frequency ranges. To avoid a muddy mix, it’s important that each track’s frequency range is balanced so that every audio component has enough frequency space to breathe.


The human voice generally sits in the middle of the frequency range, so you can cut the top and bottom of all the dialogue. A high-pass and a low-pass filter are generally what you want to use. This eliminates the low and high frequencies that are unnecessary for the human voice’s spectrum: the low rumble of a generator that was perhaps too close to the set, a big truck driving by, or even the movement of your boom operator’s fingers on the pole. To eliminate these, cut below 100 Hz and above 10 kHz.
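As a sketch, here is what that band-limiting step might look like in Python with SciPy’s Butterworth filters. The 100 Hz and 10 kHz cutoffs come from the tip above; the sample rate and filter order are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt

SR = 48000  # assumed sample rate

def dialogue_bandlimit(x, lo_hz=100.0, hi_hz=10000.0):
    """Remove rumble below lo_hz and hiss above hi_hz from a mono track."""
    hp = butter(4, lo_hz, btype="highpass", fs=SR, output="sos")
    lp = butter(4, hi_hz, btype="lowpass", fs=SR, output="sos")
    return sosfilt(lp, sosfilt(hp, x))

# Demo: a 50 Hz "generator rumble" mixed with a 1 kHz voice-band tone.
t = np.arange(SR) / SR
rumble = np.sin(2 * np.pi * 50 * t)
voice = np.sin(2 * np.pi * 1000 * t)
clean = dialogue_bandlimit(rumble + voice)
```

After filtering, the 50 Hz rumble is heavily attenuated while the voice-band tone passes almost untouched.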

Always EQ the dialogue with the entire mix playing, so that you’re not soloing the tracks and working in the dark; otherwise you can create problems with dialogue clarity in the full mix. Below are a few tips and tweaks for addressing your dialogue (reverse for the opposite effect):

  • Male fullness = boost 120 Hz
  • Female fullness = boost 240 Hz
  • General dialogue = boost 2.5 kHz
  • Nasally dialogue = cut between 2 kHz and 4 kHz
  • Male sibilance = cut between 4 kHz and 6 kHz
  • Female sibilance = cut between 6 kHz and 8 kHz
  • Increase vocal presence = boost 5 kHz
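These boosts and cuts are what a parametric EQ’s peaking bands do. Here is a minimal Python sketch using the standard RBJ biquad peaking-filter formulas; the center frequencies and gains are just the tips above, and the Q value is an assumption.

```python
import numpy as np
from scipy.signal import lfilter

SR = 48000

def peaking_eq(f0_hz, gain_db, q=1.0):
    """Return (b, a) biquad coefficients that boost (or, with negative
    gain_db, cut) a bell-shaped band centered on f0_hz (RBJ cookbook)."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0_hz / SR
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

def apply_band(x, f0_hz, gain_db, q=1.0):
    """Run one EQ band over a mono signal."""
    b, a = peaking_eq(f0_hz, gain_db, q)
    return lfilter(b, a, x)

# e.g. add "male fullness" (+3 dB at 120 Hz) and tame presence (-3 dB at 5 kHz)
# on a hypothetical dialogue track `dialog`:
# dialog = apply_band(apply_band(dialog, 120, 3.0), 5000, -3.0)
```

Chaining `apply_band` calls, one per row of the table, gives you a basic multi-band dialogue EQ.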


Early reflections added to the human voice can contribute a great deal of presence and realism that EQ simply cannot recreate. Placing the dialogue in the correct acoustical space is a crucial element of good sound design and a solid mix. This is especially true for ADR work.

Because of the different types of reverbs and effects, the decisions you make in creating your acoustical space will vary drastically with each scene, each person, and the placement of your actor relative to the camera angle. For example, your actor may be speaking directly at the camera, then turn completely away from it, speaking into the distance. Remember, the camera is the audience’s point of view.

So, how would you best demonstrate this in your mix? Usually by automating the volume, reverb, and a low-pass filter to achieve the desired effect. EQ is your friend here too. But again, there are no rules. Generally, you want a far-sounding verb and a near-sounding verb on your reverb busses to obtain the correct atmospheric mix.
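As an illustration of that automation idea, here is a toy Python sketch: a few early reflections stand in for the "near" space, and a per-sample distance value crossfades dry against wet as the actor turns away. All delay times, gains, and mix curves are invented for illustration, not mixing recommendations.

```python
import numpy as np

SR = 48000

def early_reflections(x, delays_ms=(11, 17, 23), gains=(0.5, 0.35, 0.25)):
    """Add a handful of short, discrete echoes to the dry signal."""
    out = np.copy(x).astype(float)
    for d_ms, g in zip(delays_ms, gains):
        d = int(SR * d_ms / 1000)
        out[d:] += g * x[:len(x) - d]
    return out

def distance_mix(dry, wet, distance):
    """distance is 0..1 per sample (0 = at the camera, 1 = turned away);
    more distance means less dry signal and more reverberant signal."""
    return (1 - 0.8 * distance) * dry + (0.2 + 0.8 * distance) * wet

impulse = np.zeros(SR // 10)
impulse[0] = 1.0
near = early_reflections(impulse)
# Automate the actor turning away over the length of the line:
turn = np.linspace(0.0, 1.0, len(impulse))
line = distance_mix(impulse, near, turn)
```

In a real mix you would drive the same kind of curve from your DAW’s automation lane rather than computing it by hand.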


We take in a huge variety of noises and sounds in our everyday lives. Sound designers are always taking advantage of these opportunities, consistently thinking outside the box, continually scheming and searching for the best ways to create engaging soundscapes through experimentation.