Johannes KnoppArtificial Intelligence, Deep Learning, Data Science, Infrastructure, Machine Learning, Data Engineering
10 years ago we built a classifier for categorizing product data. Let's take a journey through the lessons we learned over the years about building, maintaining, and modernizing the category classifier.
Vasily KorfArtificial Intelligence, Code-Review, IDEs/ Jupyter, Python
Datalore supports intentions – code suggestions based on what you’ve just written.
Florian WilhelmArtificial Intelligence, Deep Learning, Data Science, Machine Learning, Science
Are you sure about that?! Uncertainty Quantification in AI helps you decide whether you can trust a prediction or not.
Thorben JensenArtificial Intelligence, Algorithms, Data Science, Machine Learning, Data Engineering
How can you automate the labor-intensive task of feature engineering for Machine Learning? This talk gives an overview of methods, presents open-source libraries for Python, and compares their performance.
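One simple flavour of automated feature engineering can be sketched with scikit-learn's `PolynomialFeatures`, which mechanically generates interaction and power terms from raw columns (an illustration only; the libraries the talk compares may work quite differently):

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# two raw features per sample
X = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# automatically derive all degree-2 terms: x1, x2, x1^2, x1*x2, x2^2
poly = PolynomialFeatures(degree=2, include_bias=False)
X_new = poly.fit_transform(X)

print(X_new.shape)   # (2, 5): two original columns plus three generated ones
```

The appeal, and the risk, of such tools is the same: the feature space grows combinatorially, so downstream selection or regularization is usually needed.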
Felicia BurtscherArtificial Intelligence, Algorithms, Deep Learning, Data Science, Networks, Machine Learning, Science
#julia_introduction. Why Julia is better than Python. Machine learning made easy with JuliaBox.
Harald BoschArtificial Intelligence, Computer Vision, Deep Learning, IDEs/ Jupyter, Machine Learning
Build an ML showcase using #transferlearning, #keras, #WebRTC, #python
Adrin JalaliArtificial Intelligence, Community, Code-Review, Machine Learning
an update on recent scikit-learn changes, current affairs, and the roadmap
Alexander CS HendorfArtificial Intelligence, Business & Start-Ups, Data Science, Machine Learning, Use Cases
Artificial Intelligence needs to be better understood in enterprises. Close the communication gap between engineers and management. Make data literacy happen in your organisation.
Peter Kairouz, Amlan ChakrabortyArtificial Intelligence, Deep Learning, Data Science, Machine Learning, Data Engineering
Meet TensorFlow Federated: an open-source framework for machine learning and other computations on decentralized data.
Valerio MaggioArtificial Intelligence, Deep Learning, Machine Learning, Science
This tutorial provides a general introduction to the PyTorch Deep Learning framework with specific focus on Deep Learning applications for Precision Medicine and Computational Biology.
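The mechanism at the heart of PyTorch is automatic differentiation; a minimal taste of it fits in a few lines (a generic sketch, not the tutorial's precision-medicine material):

```python
import torch

# build a tiny computation graph: y = x^2 + 3x
x = torch.tensor([2.0], requires_grad=True)
y = (x ** 2 + 3 * x).sum()

# autograd walks the graph backwards and fills x.grad with dy/dx = 2x + 3
y.backward()
print(x.grad)   # tensor([7.])
```

Everything else in the framework, layers, optimizers, loss functions, is built on top of exactly this gradient machinery.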
Marysia WinkelsArtificial Intelligence, Algorithms, Computer Vision, Deep Learning, Data Science, Machine Learning, Science
Equivariance in CNNs: how generalising the weight-sharing property increases data-efficiency
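The core idea, that convolution commutes with translation, can be checked numerically. Below is a 1-D circular-convolution sketch in NumPy (my own toy example; group-equivariant CNNs generalise this property from shifts to rotations and reflections):

```python
import numpy as np

def circ_conv(x, k):
    # circular 1-D convolution: out[i] = sum_j k[j] * x[(i - j) mod n]
    n = len(x)
    return np.array([sum(k[j] * x[(i - j) % n] for j in range(len(k)))
                     for i in range(n)])

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
k = np.array([0.5, 0.25, 0.25])

# shifting the input, then convolving ...
a = circ_conv(np.roll(x, 2), k)
# ... gives the same result as convolving, then shifting the output
b = np.roll(circ_conv(x, k), 2)

assert np.allclose(a, b)   # translation equivariance
```

Weight sharing across positions is exactly what buys this property, and sharing across a larger symmetry group buys the corresponding data-efficiency gains.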
Vincent WarmerdamArtificial Intelligence, Algorithms, Data Science, IDEs/ Jupyter, Machine Learning, Statistics
gaussian progress. it's meta, but also the most normal conference title this year!
Tilman KrokotschArtificial Intelligence, Deep Learning, Data Science, Machine Learning
PyTorch makes developing, training and debugging deep neural networks convenient. Learn how to export your trained model using its just-in-time (JIT) compiler to hide your network architecture, minimize code dependencies and use it in the C++ API. It's getting faster, too!
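The export workflow can be sketched with tracing on a toy model (`torch.jit.script` is the alternative when the network has data-dependent control flow; the model and names below are illustrative assumptions, not the speaker's code):

```python
import io
import torch

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = Net().eval()
example = torch.randn(1, 4)

# record the executed ops as a TorchScript graph
traced = torch.jit.trace(model, example)

# serialize; the same archive is loadable from C++ via torch::jit::load
buf = io.BytesIO()
torch.jit.save(traced, buf)
buf.seek(0)
loaded = torch.jit.load(buf)

assert torch.allclose(loaded(example), model(example))
```

The saved archive carries the graph and weights but not your Python source, which is what hides the architecture and cuts the code dependencies.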
Dr. Benjamin WerthmannArtificial Intelligence, Big Data, Machine Learning
Find out and discuss how law and ethics should be included in a framework for machine learning that protects creativity and effectiveness
David WölfleArtificial Intelligence, Algorithms, Deep Learning, Data Science, Machine Learning, Statistics
This talk covers the theoretical background behind two common loss functions, mean squared error and cross entropy, including why they are used for machine learning at all, and what limitations you should keep in mind.
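Both losses are short enough to write out directly (a NumPy sketch with made-up numbers):

```python
import numpy as np

# mean squared error for a regression prediction
y_true = np.array([2.0, 0.5, 1.5])
y_pred = np.array([1.8, 0.7, 1.4])
mse = np.mean((y_true - y_pred) ** 2)          # here: 0.03

# cross entropy for a 3-class classification prediction
logits = np.array([0.1, 2.0, -1.0])
probs = np.exp(logits) / np.exp(logits).sum()  # softmax
label = 1                                      # index of the true class
ce = -np.log(probs[label])                     # penalises low probability on the truth
```

MSE punishes large errors quadratically; cross entropy punishes confident wrong class probabilities, which is one of the limitations the talk unpacks.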
Sarah Diot-GirardArtificial Intelligence, Data Science, Natural Language Processing, Machine Learning
Data privacy can be tricky when doing Natural Language Processing, join us to explore the different strategies you can use to keep your user data safer!
Chiin-Rui Tan, Dare Imam-LawalArtificial Intelligence, Algorithms, Data Science, Machine Learning, Web, Data Mining / Scraping, Use Cases
Socio-Technical Python for OSINT! The age-old statecraft discipline of gathering intelligence from open sources is today critical for investigating disinformation, but has lacked modernisation. A former UK Gov Head of DataSci presents a maturity model for updating legacy OSINT with Python!
Peggy Sylopp, Aislyn RoseArtificial Intelligence, Algorithms, Computer Vision, Deep Learning, Data Science, Machine Learning, Science
Control what you hear with deep learning and open audio databases. The developer and manager of \\NoIze//, a project supported by the Prototype Fund, share what's helped them build an open-source, smart, low-compute noise filter in Python.
Katharina RaschArtificial Intelligence, Data Science, DevOps, Infrastructure
There is now a wealth of tools that support data science best practices (e.g. tracking experiments, versioning data). Let’s take a look at which tools are available and which ones might be right for your project.
Avaré StewartArtificial Intelligence, Data Science, Natural Language Processing, Machine Learning, Data Engineering
Unleash the Intelligence in your Data: Transform a Legacy System into a Bias-Mitigating AI Solution for Debt Repayment with Tesseract, spaCy, & AI Fairness 360
Irina Vidal MigallónArtificial Intelligence, Computer Vision, Deep Learning, Machine Learning
How much time & risk do you have? Ways to robustify your vision NN model before you let it go live.
Alessia MarcoliniArtificial Intelligence, Data Science, Machine Learning
Versioning in Data Science projects can be pretty painful: are you able to track the data sets along with the code itself and some of the resulting models?
Yurii TolochkoArtificial Intelligence, Algorithms, Deep Learning, Machine Learning, Statistics
Why doesn’t RL show the same success as (un)supervised learning? Inherent difficulties facing RL and avenues for future work
Marianne StecklinaArtificial Intelligence, Deep Learning, Data Science, Natural Language Processing, Machine Learning, Science
Language models like BERT can capture general language knowledge and transfer it to new data and tasks. However, applying a pre-trained BERT to non-English text has limitations. Is training from scratch a good (and feasible) way to overcome them?