Autonomous Vehicles (AVs) will soon be a reality on our roads. Until now, their standard behaviour has been predetermined, as it results from programmed algorithms. In risky traffic situations, however, they will face dilemmas such as deciding between harming the passenger or harming others. In this experiment, we therefore investigate how people solve a reframed version of the well-known Trolley problem [Foot, P. 1967. “The problem of abortion and the doctrine of double effect,” Oxford Review 5: 5–15] under two conditions: when subjects actually drive a vehicle simulator, versus when they solve the same dilemma by programming a hypothetical AV. In both settings, participants’ decisions have real monetary consequences that affect both others and themselves. Our Probit models indicate that subjects who program an AV, and who are more cautious in terms of speed, are less likely to sacrifice a pedestrian (and more likely to sacrifice themselves) than those who actually drive and prefer a higher driving speed. Moreover, we find that subjects’ choices are associated with risk aversion but not with moral beliefs or loss aversion. Implications of the driving-versus-programming discrepancy for the design of AV algorithms are discussed.