Question from an advanced computer architecture course:

Consider the following piece of RISC-V code:

        LD     R8, 0(R9)      ; R8 is loaded with 2 from memory
  loop: FADD.D F0, F2, F4     ; F2 and F4 are already initialized
        FDIV.D F6, F8, F10    ; F8 and F10 are already initialized
        FMUL.D F2, F14, F0    ; F14 is already initialized
        ADDI   R8, R8, -1     ; immediate is -1 (decimal)
        FSUB.D F8, F2, F6
        BNE    R8, R0, loop
        FSD    F8, 0(R10)     ; R10 is already initialized

The RISC-V processor is implemented as the scalar, hardware-speculative Tomasulo machine discussed in class. Assume the following:

- One (1) instruction can be committed per cycle.
- Functional-unit timings are as listed on page C-77 of the Hennessy book: FADD.D, FMUL.D, FDIV.D, FLD, and FSD take 3, 11, 41, 2, and 1 clock cycles, respectively.
- There are two FP reservation stations shared by FADD.D/FSUB.D and two shared by FMUL.D/FDIV.D. A reservation station is freed when its instruction finishes execution.
- A Branch Unit in the EX stage computes the branch's effective address and resolves its condition; there is also additional branch-prediction hardware in and out of the pipeline. Branches are predicted taken. On a misprediction, instructions are flushed only when the branch instruction commits. There is no branch delay slot.
- There are enough functional units that integer instructions never stall.
- Each L1 cache access takes one clock cycle, and there are no cache misses.
- There is one fetch unit that fetches one instruction at a time and one decode unit that decodes one instruction at a time.

1) Show the execution of the code.
2) What is the last clock cycle in which the Commit stage of the last non-speculative instruction is done?
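As a starting point for part 1, the per-iteration dataflow critical path can be computed from the RAW dependences alone. The sketch below is an illustration under simplifying assumptions (it ignores issue, CDB write-back, commit, and reservation-station contention, so it is only a lower bound on one iteration's execution time, not the full Tomasulo schedule); the latencies come from the problem statement, and FSUB.D is assumed to take the adder's 3 cycles since it shares the FADD.D reservation stations.

```python
# Latencies from the problem statement (clock cycles).
# Assumption: FSUB.D uses the FP adder, so it also takes 3 cycles.
LAT = {"FADD.D": 3, "FSUB.D": 3, "FMUL.D": 11, "FDIV.D": 41}

# RAW dependences inside one loop iteration (consumer -> producers):
# FMUL.D reads F0 from FADD.D; FSUB.D reads F2 from FMUL.D and F6 from FDIV.D.
DEPS = {"FADD.D": [], "FDIV.D": [],
        "FMUL.D": ["FADD.D"], "FSUB.D": ["FMUL.D", "FDIV.D"]}

def earliest_finish(op, memo=None):
    """Earliest cycle, relative to iteration start, at which op's result is ready."""
    if memo is None:
        memo = {}
    if op not in memo:
        memo[op] = max((earliest_finish(p, memo) for p in DEPS[op]),
                       default=0) + LAT[op]
    return memo[op]

# FDIV.D (41 cycles) dominates the FADD.D -> FMUL.D chain (3 + 11 = 14),
# so FSUB.D cannot finish before cycle 41 + 3 = 44 within an iteration.
print(earliest_finish("FSUB.D"))
```

This shows why the divider, not the adder/multiplier chain, paces each iteration; the full answer must still overlay issue, write-back, commit, and the two-entry reservation-station limits on top of this dataflow bound.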