April 30, 2024

Improve your MLflow experiments by keeping track of historical metrics
This blog was written by Stefano Bosisio.

What do we need today? First, let's think about the design of the main SDK protocol. The aim is to allow data scientists to:

- add to a given experiment's run the historical metrics computed in previous runs
- add custom computed metrics to a specific run

Thus, we can implement the following two functions:

- report_metrics_to_experiment: collects all the metrics from an experiment's previous runs and groups them in an interactive plot, so users can immediately spot issues and understand the overall trend
- report_custom_metrics: posts a dictionary of data scientists' metric annotations to a given experiment's run. This is useful when a data scientist wants to attach metrics computed on unseen data to a specific experiment.
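As a rough sketch of the grouping logic behind these two functions, the snippet below works on plain dictionaries that mimic the shape of runs fetched from MLflow (for example via MlflowClient.search_runs). The input shape, the function bodies, and the in-place merge are illustrative assumptions, not the real SDK implementation; in production code report_custom_metrics would call MlflowClient.log_metric (or log_batch) instead of mutating a dict, and the plotting step is omitted.

```python
from typing import Any, Dict, List


def report_metrics_to_experiment(runs: List[Dict[str, Any]]) -> Dict[str, List[float]]:
    """Group each metric's values across an experiment's runs.

    `runs` is a simplified stand-in for what you would get back from
    MlflowClient.search_runs: one dict per run with a "metrics" mapping.
    The returned history (metric name -> ordered list of values) is what
    you would then feed to an interactive plot.
    """
    history: Dict[str, List[float]] = {}
    for run in runs:
        for name, value in run["metrics"].items():
            history.setdefault(name, []).append(value)
    return history


def report_custom_metrics(run: Dict[str, Any], custom: Dict[str, float]) -> Dict[str, Any]:
    """Attach a data scientist's custom metrics (e.g. on unseen data) to a run.

    In real code this would log each entry against the run via the MLflow
    tracking client; here we simply merge the dict to keep the sketch
    self-contained.
    """
    run["metrics"].update(custom)
    return run


# Example: two historical runs of the same experiment (hypothetical values)
runs = [
    {"run_id": "run_1", "metrics": {"accuracy": 0.81, "f1": 0.78}},
    {"run_id": "run_2", "metrics": {"accuracy": 0.85, "f1": 0.80}},
]
history = report_metrics_to_experiment(runs)
# history == {"accuracy": [0.81, 0.85], "f1": [0.78, 0.80]}

annotated = report_custom_metrics(runs[1], {"holdout_auc": 0.88})
# annotated["metrics"]["holdout_auc"] == 0.88
```

The history dictionary makes the trend across runs explicit, which is exactly what the interactive plot in report_metrics_to_experiment needs as input.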