I'm Pietro Franceschi, and I work at
the Computational Biology Unit of the Research and Innovation Centre
at Fondazione E. Mach, in Italy.
This talk is centered on the processing of metabolomics data,
in particular data coming from mass spectrometry in untargeted experiments.
The talk is organized more or less as follows:
first, a general introduction about data analysis in
metabolomics, and in omic sciences in general, covering the major
fundamental issues that have to be faced when analyzing this type of data.
In the second part, I will talk in a little more detail
about how the processing of the data is actually performed.
As a general point,
when you work with data in omic sciences,
an interesting question is to understand what
the role of bioinformatics, statistics, and chemometrics is.
The idea is that you have this huge amount of data, and you would like to analyze
it with the objective of promoting the incremental progress of science.
At the same time, you want to guarantee the validity and the correctness of the results.
In this sense, you would also like to be able to produce scientific results.
And when I say scientific,
I mean something that is not only true for the data you are analyzing,
but is true in the general case.
That is somehow quite
understandable when you think of basic sciences like physics or chemistry,
but it can be more tricky when you think of biology or medicine.
What you would like to do with these tools coming from bioinformatics
is to be consistent and also to get the maximum from complex data.
The problem is that these data are so complex that it is
not possible to simply browse them by hand and try to figure out what is happening.
To sum up, the idea is that you use bioinformatics, statistics,
or chemometrics to go from data to knowledge,
and you'd like this knowledge to be valid in general.
We are living in a society where