I would be very grateful if someone could explain why, when we multiply decimals using the standard procedure (which has us ignore the decimal points and multiply as whole numbers), we know in advance to place the decimal point in the answer so that it has exactly as many decimal places as the two factors have between them.
Similarly, why do we know the decimal point is in the correct place when dividing decimal numbers with the long division algorithm? That is, when we move both decimal points to the right to make the divisor a whole number, why/how does that work in the algorithm?
To clarify, I know the calculations are correct, and I understand why; what I don't understand is the particular layout of long multiplication and long division with respect to the decimal points. For example, I see that moving the decimal points in a division problem amounts to multiplying both numbers by the same power of ten so we can handle them more easily, and that the quotient is therefore unchanged whether we scale the numbers up or down. What I can't grasp is why we know to place the decimal point in the quotient, written above the dividend, directly above where it lies in the dividend.
Apologies if this makes little sense; I'll do better to clarify if prompted.
Welcome to the forum.
Convert the calculation into fractions (over 10, 100, 1000, etc.) and you'll see why it works.
e.g. 2.3 x 400.45
= 23/10 x 40045/100 = (23 x 40045)/1000 = 921035/1000 = 921.035
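The multiplication rule above can be sketched in code: write each factor as an integer over a power of ten, multiply the integers, then divide by 10 raised to the combined number of decimal places. This is a minimal illustration, not a standard library routine; the function name `multiply_decimals` is my own.

```python
from decimal import Decimal

def multiply_decimals(a: str, b: str) -> Decimal:
    """Schoolbook rule: multiply as whole numbers, then restore
    the decimal point using the combined count of decimal places."""
    # Count the digits after the decimal point in each factor.
    places = (len(a.split(".")[1]) if "." in a else 0) \
           + (len(b.split(".")[1]) if "." in b else 0)
    # Strip the points and multiply as integers:
    # a = A/10^p and b = B/10^q, so a*b = (A*B)/10^(p+q).
    whole = int(a.replace(".", "")) * int(b.replace(".", ""))
    return Decimal(whole) / (Decimal(10) ** places)

print(multiply_decimals("2.3", "400.45"))  # 921.035
```

Dividing by 10 to the power (p + q) is exactly "move the decimal point left by the total number of decimal places", which is why the rule always works.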
e.g. 4.5 ÷ 0.15
= 45/10 ÷ 15/100 = 45/10 x 100/15 = 4500/150 = 450/15 = 30
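The division step can be sketched the same way: multiply dividend and divisor by 10 until the divisor is a whole number, and the quotient is unchanged because (a × 10^k) ÷ (b × 10^k) = a ÷ b. A small illustration using exact `Fraction` arithmetic (the helper name `shift_to_whole_divisor` is my own):

```python
from fractions import Fraction

def shift_to_whole_divisor(dividend: Fraction, divisor: Fraction):
    """Multiply both numbers by 10 until the divisor is an integer.
    The quotient is unchanged: (a*10^k)/(b*10^k) = a/b."""
    while divisor.denominator != 1:
        dividend *= 10
        divisor *= 10
    return dividend, divisor

# 4.5 ÷ 0.15 becomes 450 ÷ 15, with the same quotient.
d, v = shift_to_whole_divisor(Fraction("4.5"), Fraction("0.15"))
print(d, v, d / v)  # 450 15 30
```

This is exactly what the long-division layout does when you slide both decimal points the same number of places to the right before dividing.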
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei