How to fight against big (and complex) data sets: neural networks vs. conventional methods


The atmospheric satellite observation community is facing more and more challenges due to the ever-increasing number of observations. The OMI air quality space-borne instrument has already delivered ~542 million spectra per year since 2004, and its successor TROPOMI (to be launched later this year) is expected to increase this number by a factor of 20.

Conventional atmospheric retrieval methods must balance high accuracy for each single observation against fast processing time, and there is usually a cost somewhere.

Following the success of our recent aerosol layer height retrieval algorithm developed for the OMI satellite instrument (more details here), we are addressing the issue of big and complex data sets with machine learning approaches in general, and neural network techniques more specifically.
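To illustrate the general idea (not our actual algorithm), the sketch below trains a tiny one-hidden-layer neural network on synthetic data to map "spectra" directly to a retrieved parameter. All data, network sizes, and hyperparameters here are made up for illustration; the appeal is that once trained, a network like this turns an expensive iterative retrieval into a few fast matrix multiplications per observation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spectra": 1000 observations, 50 spectral channels,
# and a synthetic target (stand-in for, e.g., aerosol layer height).
X = rng.normal(size=(1000, 50))
true_w = rng.normal(size=50)
y = np.tanh(X @ true_w)

# One hidden layer of 16 tanh units, linear output.
W1 = rng.normal(scale=0.1, size=(50, 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=16)
b2 = 0.0

# Mean-squared error before training, for comparison.
h0 = np.tanh(X @ W1 + b1)
mse0 = float(np.mean((h0 @ W2 + b2 - y) ** 2))

lr = 0.01
for step in range(2000):
    h = np.tanh(X @ W1 + b1)      # hidden activations
    pred = h @ W2 + b2            # network output
    err = pred - y                # residual
    # Backpropagate the mean-squared-error gradient.
    gW2 = h.T @ err / len(y)
    gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)
    gW1 = X.T @ dh / len(y)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

h = np.tanh(X @ W1 + b1)
mse = float(np.mean((h @ W2 + b2 - y) ** 2))
print(f"MSE before training: {mse0:.4f}, after: {mse:.4f}")
```

The one-time training cost is paid offline; applying the trained network to each new spectrum is then cheap, which is exactly the trade-off that makes such techniques attractive for data volumes like TROPOMI's.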

I shared my own (admittedly subjective) thoughts on this subject during the Second Post-graduate Teaching Workshop of the Buys Ballot Research School, at KNMI, on 2017.03.24.

Slides available here.

I specifically thank Dr. Sybren Drijfhout for the invitation and the organisation, and Dr. Tim Vlemmix, Dr. Pieternel Levelt, Dr. Pepijn Veefkind, and all my GRS & KNMI colleagues for the diverse and inspiring discussions over the last months, which motivated this talk. Finally, I gratefully acknowledge Dr. Maarten Sneep, Dr. Jacob van Peet (KNMI), Dr. Folkert Boersma (KNMI / WU), and Dr. Antonio di Noia (SRON) for their valuable inputs to my survey (cf. Survey Variational vs. statistical approaches to atmospheric parameter retrieval).

