# A use case example

I was working on a project in which not only was the input data time-stamped, but the density of the timestamps also greatly affected the computation delay in terms of clock cycles. The project was performance critical, and careful performance models were needed. Generating the time stamps according to an appropriate model was essential for the simulations to give us the results we needed and, ultimately, to verify our performance models.
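Generating such time stamps is straightforward once one knows that the inter-arrival times of a homogeneous Poisson process are independent exponential variables. A minimal sketch in Python (the rate and horizon are arbitrary illustration values, not from the project):

```python
import random

def poisson_timestamps(rate, t_end, seed=None):
    """Generate event timestamps of a homogeneous Poisson process.

    Inter-arrival times of a Poisson process with intensity `rate`
    (events per time unit) are i.i.d. Exponential(rate), so we draw
    exponential gaps and accumulate them until we pass `t_end`.
    """
    rng = random.Random(seed)
    t = 0.0
    stamps = []
    while True:
        t += rng.expovariate(rate)  # exponential inter-arrival gap
        if t > t_end:
            return stamps
        stamps.append(t)

ts = poisson_timestamps(rate=5.0, t_end=1000.0, seed=1)
# The expected number of events is rate * t_end = 5000.
print(len(ts))
```

The resulting list is sorted by construction and can be fed directly to a simulation testbench as arrival times.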

Using a number of simplifications, in a **first step** I derived **analytical expressions** for the throughput and its dependence on two main design parameters: one parameter chosen by us, and one that was a consequence of unused clock edges in a state machine. These expressions could be used to sweep the parameters and plot the results, giving important insights into how to increase our critical throughput. They were also used to study the relation between the density of the input time stamps and the throughput. The plot from this analysis is shown below.
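A parameter sweep over such analytical expressions is a few lines of code. The expression below is a placeholder, not the actual formula from the project; `depth` stands in for the designer-chosen parameter and `idle_frac` for the fraction of clock edges lost in the state machine:

```python
import numpy as np

def throughput(depth, idle_frac, rate):
    # Hypothetical toy expression: capacity shrinks with idle clock
    # edges and saturates with increasing depth. Substitute the real
    # analytical formula here.
    return rate * depth * (1.0 - idle_frac) / (depth + rate)

depths = np.arange(1, 17)                 # designer-chosen parameter
idle_fracs = np.linspace(0.0, 0.5, 6)     # lost-clock-edge fraction
grid = np.array([[throughput(d, f, rate=2.0) for f in idle_fracs]
                 for d in depths])

# Locate the best operating point in the swept grid.
best = np.unravel_index(np.argmax(grid), grid.shape)
print(depths[best[0]], idle_fracs[best[1]], grid[best])
```

With the grid in hand, a contour or surface plot of `grid` over the two parameters gives exactly the kind of design-space picture described above.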

In the **second step**, I used a **Matlab model** of the full design, which I had developed earlier for other purposes. That model verified that my analytical expressions were correct as far as they went, and it also included some parameters omitted from the analytical model. The input data was generated with appropriate time stamps using the methods described in this post.
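The cross-check between a behavioural model and the analytical expectation can be sketched as follows. This is an illustrative stand-in, not the project's Matlab model: a single-server pipeline with a fixed service time, driven by Poisson arrivals. As long as the server is stable (utilisation below one), the measured throughput should converge to the arrival rate:

```python
import random

def poisson_timestamps(rate, t_end, rng):
    t, out = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t > t_end:
            return out
        out.append(t)

def simulate_pipeline(stamps, service_time):
    """Single-server FIFO with deterministic service time: each item
    starts when it has arrived and the server is free."""
    free_at = 0.0
    for t in stamps:
        start = max(t, free_at)
        free_at = start + service_time
    return len(stamps) / free_at  # completed items per time unit

rng = random.Random(7)
rate, t_end, service = 4.0, 5000.0, 0.1   # utilisation 0.4 < 1: stable
stamps = poisson_timestamps(rate, t_end, rng)
measured = simulate_pipeline(stamps, service)
print(measured)  # should be close to rate = 4.0
```

Any systematic deviation between the measured and predicted throughput would flag an error either in the model or in the analytical expression.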

In the **third step**, I simulated the entire **HDL design** to study the throughput, again with properly generated time stamps. This verified the correctness of both the analytical model and the Matlab model. Here I was also able to change any detail of the design and study its effect on the throughput. I could not have trusted such results without a fully realistic model of the time stamps.

In a **fourth step**, I performed a **Hardware-in-the-Loop test**, where my test data was sent to an FPGA hosting the design, rather than to a simulation.

Finally, I could also perform the **third and fourth steps** with input data generated not by me but by a colleague, **using models of the actual physical processes that gave rise to the input data**. Those models made no assumption that anything followed a "Poisson process"; they modeled the underlying physical process, which in turn *gives rise to* Poisson/exponentially distributed data. Using this input data, I could verify that all my throughput models were still valid, and hence that modelling the time stamps as coming from a Poisson process was correct.
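Two quick diagnostics make this kind of verification concrete: the inter-arrival times of a Poisson process are exponential, so their coefficient of variation (std/mean) should be close to 1, and counts in fixed windows should have variance roughly equal to their mean (index of dispersion close to 1). A hedged sketch, here self-tested on synthetic data where real measured stamps would go:

```python
import random
import statistics

def poisson_diagnostics(stamps, t_end, window):
    """Return (cv, dispersion): both should be ~1 for a Poisson process."""
    gaps = [b - a for a, b in zip(stamps, stamps[1:])]
    cv = statistics.stdev(gaps) / statistics.mean(gaps)
    n_windows = int(t_end // window)
    counts = [0] * n_windows
    for t in stamps:
        i = int(t // window)
        if i < n_windows:
            counts[i] += 1
    dispersion = statistics.variance(counts) / statistics.mean(counts)
    return cv, dispersion

# Synthetic Poisson data as a stand-in for the physically modeled input.
rng = random.Random(3)
t, stamps = 0.0, []
while True:
    t += rng.expovariate(2.0)
    if t > 20000.0:
        break
    stamps.append(t)

cv, disp = poisson_diagnostics(stamps, 20000.0, window=5.0)
print(cv, disp)
```

If the physically generated data passes these checks, treating it as Poisson in the upstream models is justified; large deviations would point to clustering or regularity that the Poisson assumption cannot capture.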

# Suggested further reading

As is often the case, Wikipedia is the best source of more information:

- Poisson Processes
- The Exponential Distribution
- The Poisson Distribution

The normal distribution also appears in connection with the Poisson process: for a large expected number of events, the Poisson count distribution is well approximated by a normal distribution with equal mean and variance (a central-limit effect).