In any game that runs in real time or contains animations of any kind, you'll want some form of timing to control the speed of events. Without it, the speed of the animation or simulation depends on the computer it runs on. The effect can be seen in a lot of old DOS games: they do no timing (timing wasn't exactly easy in DOS) and run way too fast on a modern computer.
To do timing you'll need some kind of timer: a function that returns a 'current time' at high precision (you'll want at least 1/100 of a second). Modern operating systems provide usable timer functions, and if you want something platform-independent, libraries like SDL and GLFW provide them too.
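For example, Python's standard library has a high-resolution timer built in (the choice of `time.perf_counter` here is just one option; SDL and GLFW expose their own equivalents):

```python
import time

# time.perf_counter() is a monotonic, high-resolution clock;
# its precision is far better than the 1/100 s we need.
t0 = time.perf_counter()
# ... one frame's worth of work would happen here ...
t1 = time.perf_counter()
elapsed = t1 - t0  # elapsed time in seconds, as a float
```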
There are two ways to obtain time-independent movement and animation. In the first you make every 'frame' take a fixed amount of time; in the second you use the amount of time that elapsed between this frame and the last to determine how much things have 'changed'. Both methods have upsides and downsides.
Fixed time steps
With fixed time steps you decide how much time should pass between frames and make it a constant. At some point in the handling of every frame you check your timer, and if not enough time has elapsed you either idle or do some extra calculations until it has. This has the advantage of making things like physics code simpler. Also, if you do this right your simulation can be deterministic: run it with the same parameters and the same results come out. A problem with this approach is what to do when the computer is not fast enough to process a whole frame's work within the fixed time step. You can make the time step bigger, but that rather defeats the purpose of fixed time steps. If you know video is your bottleneck, you can try dropping frames to keep the code on time, but if your code itself is too slow, the game may appear to freeze. Another possibility is to just run slower on a slow computer.
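A minimal fixed-time-step loop might look like this sketch in Python (the 40ms step length and the function names are placeholders I've chosen, not part of any particular library):

```python
import time

TIME_STEP = 0.040  # fixed duration of one frame: 40 ms (arbitrary choice)

def run_fixed(num_frames, update, render):
    """Run num_frames frames, idling so that each one takes TIME_STEP."""
    next_frame = time.perf_counter()
    for _ in range(num_frames):
        update()   # the simulation always advances by exactly TIME_STEP
        render()
        next_frame += TIME_STEP
        delay = next_frame - time.perf_counter()
        if delay > 0:
            time.sleep(delay)  # idle until enough time has elapsed
        # If delay <= 0 the machine fell behind; in this sketch the
        # game simply runs slower, one of the options discussed above.
```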
Variable time steps
With variable time steps you just keep running as fast as possible (though you might want to restrict the maximum speed to the screen refresh rate), and every frame you check the amount of time that elapsed between the last two frames. This time difference is then used in calculations; for example, in physics code the distance objects move is determined by multiplying their speed by the time difference. An advantage of this method is that on slower computers the game will be a little more 'jumpy' but still run at full speed. A problem is that the simulation behaves strangely if for some reason a frame takes a long time (for example, the operating system decides it urgently needs to do something and eats up all processing power for half a second). In that case the time difference will be huge, and unless it is capped objects will 'jump'. It's not hard to filter out such unusual behaviour by skipping or clamping impossibly large deltas; if you don't, your physics code will have to expect this kind of jumping, which makes collision detection harder.
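The idea, including the delta cap, can be sketched in Python; the `Ball` class, the cap value, and the function names are made up for illustration:

```python
import time

MAX_DELTA = 0.1  # clamp impossibly large frame times (value is a guess)

class Ball:
    def __init__(self):
        self.x = 0.0
        self.speed = 5.0  # units per second

def step(ball, dt):
    # Clamp huge deltas (e.g. the OS stole half a second from us)
    # so the ball doesn't 'jump' through obstacles.
    dt = min(dt, MAX_DELTA)
    ball.x += ball.speed * dt  # distance = speed * time difference

def run(ball, num_frames):
    last = time.perf_counter()
    for _ in range(num_frames):
        now = time.perf_counter()
        step(ball, now - last)  # movement scales with elapsed time
        last = now
        # rendering would happen here, as fast as possible
```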
Mixing the two
I have never tried this myself, but some games seem to run their physics at a fixed time step while updating their graphics and input as often as possible. The state that is drawn to the screen is determined by interpolating the current simulation state to the current point in time. This has advantages: it makes the physics deterministic while keeping the game responsive, but it is more complex.
I have done this before. It can be more complex at times, but the payoff is quite nice if you want the fastest possible smooth animation. Here's how it works.

First, you decide to run the physics/game logic code at some fixed rate, say every 40ms. At the start of your game loop, you determine whether more than 40ms has passed since you last ran the logic code. If it has, you run your game logic loop. Be sure to check whether you have to run the loop more than once to catch up. This can happen if the computer suddenly becomes bogged down by some task and, say, 300ms passes before your game gets to run again. An annoying jump to be sure, but if you want to maintain accurate timing (say, for a simulation), you'll want to make sure that your game logic code is always run the proper number of times. This lets you write your game code under the assumption that the same amount of time passes on each run, which makes it far easier to deal with.

To make the animation smooth, though, you need to compute the future location of objects while in the logic code. For instance, suppose the logic code determines that object A must move from point (10,10) to point (50,10) before the next run of the logic code; in other words, it needs to move from (10,10) to (50,10) in 40ms. Once the logic code is done running, we move on to the rendering code. In the rendering code, you look at the current time and subtract to see how much time is left until the next logic run. Then you take object A's previous position and its destination position, and use the ratio of time left until the next logic run to interpolate where object A should be at this very instant.
Let's say that as we go into our rendering loop, we have 12ms left until it is time to run the logic code again (which in our example runs every 40ms). So the math is (40-12)/40, which gives us a delta of 0.7: we are 70% of the way to our destination. Now we subtract our previous X value (10) from our destination X value (50) for object A and multiply the difference by the delta to get our exact X position at this moment in time: ((50-10)*0.7)+10 = 38. So we draw object A at (38,10) for this frame. On the next frame we still have 4ms until the game logic code should run, so we don't run the logic code yet and skip straight to the rendering code. This time the formulas give the following results: delta = (40-4)/40 = 0.9; ((50-10)*0.9)+10 = 46; object A is at (46,10).
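The arithmetic of this worked example can be written out in Python (the function name `interpolate` is my own):

```python
LOGIC_STEP_MS = 40  # the logic code runs every 40 ms in this example

def interpolate(prev, dest, ms_until_next_logic):
    """Interpolated position between two logic steps."""
    delta = (LOGIC_STEP_MS - ms_until_next_logic) / LOGIC_STEP_MS
    return prev + (dest - prev) * delta

# Object A moves from x=10 to x=50 over one 40 ms logic step.
print(interpolate(10, 50, 12))  # 38.0 -- 70% of the way there
print(interpolate(10, 50, 4))   # 46.0 -- 90% of the way there
```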