#TALK

JDA ICON - Enabler of AI - Overview of an AI Architecture

JDA ICON 2019 was all about technology, APIs, AI (Artificial Intelligence) and ML (Machine Learning).
#python #talk

Peter Hoffmann

Exasol User Group Karlsruhe

Exasol on Microsoft Azure – automatic deployment in less than 30 minutes
#exasol #azure #pydata #talk

Peter Hoffmann

EuroSciPy 2018 - Apache Parquet as a columnar storage for large datasets

Apache Parquet is an efficient, binary columnar data format that can be used for high-performance data I/O in Pandas and Dask.
#python #talk

Peter Hoffmann

Europython 2018 - Using Pandas and Dask to work with large columnar datasets in Apache Parquet

Apache Parquet is an efficient, binary columnar data format that can be used for high-performance data I/O in Pandas and Dask.
#python #talk

Peter Hoffmann

Swiss Python Summit 2018 - 12 Factor Apps for Data-Science with Python

Heroku distilled its principles for building modern cloud applications into the Twelve-Factor App methodology. These principles have influenced many of our design decisions at Blue Yonder when building a data science platform.
#python #talk
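One of the twelve factors is storing config in the environment rather than in code. A minimal sketch of that idea with the standard library; the variable names and defaults are hypothetical, not from the talk:

```python
import os

def load_config(env=os.environ):
    """Read service configuration from environment variables
    (factor III: config). Names and defaults here are illustrative."""
    return {
        "database_url": env.get("DATABASE_URL", "postgres://localhost/dev"),
        "workers": int(env.get("WORKERS", "4")),
        "debug": env.get("DEBUG", "false").lower() == "true",
    }

# Passing a dict instead of os.environ makes the loader easy to test.
config = load_config({"WORKERS": "8"})
print(config["workers"])  # → 8
```

Keeping all deployment-specific settings out of the codebase is what lets the same build artifact run unchanged in development, staging, and production.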

Peter Hoffmann

EuroPython 2017 - Infrastructure as Python Code, Run your Services on Microsoft Azure

Applying Infrastructure-as-Code principles, with configuration kept in machine-processable definition files, together with the adoption of cloud computing provides faster feedback cycles in development and testing and reduces the risk of deploying to production.
#python #talk
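Because such definition files are plain data, tooling can inspect and validate them before anything is deployed. A sketch in the spirit of an Azure Resource Manager template, heavily simplified and with made-up resource names; real templates carry many more fields:

```python
import json

# A minimal, hypothetical definition file; not a real ARM template.
template = json.loads("""
{
  "resources": [
    {"type": "Microsoft.Storage/storageAccounts", "name": "logs", "location": "westeurope"},
    {"type": "Microsoft.Web/sites", "name": "api", "location": "westeurope"}
  ]
}
""")

# Machine-processable config allows pre-deployment checks,
# e.g. verifying that every resource declares a location.
missing = [r["name"] for r in template["resources"] if "location" not in r]
print(missing)  # → []
```

Checks like this run in seconds on a developer machine, which is exactly where the faster feedback cycle comes from.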

Peter Hoffmann

PyConWeb 2017 Munich - Deploying your Web Services on Microsoft Azure

This talk gives an overview of how to deploy web services to the Azure cloud with tools such as Azure Resource Manager templates, the Azure SDK for Python, and the Azure modules for Ansible, and presents best practices learned while moving a company to the Azure cloud.
#python #talk

Peter Hoffmann

EuroPython 2015 PySpark - Data Processing in Python on top of Apache Spark

Apache Spark is a computation engine for large-scale data processing. PySpark exposes the Spark programming model to Python, providing APIs for Resilient Distributed Datasets (RDDs) and for DataFrames.
#python #pydata #spark #talk

Peter Hoffmann

PyData 2015 Berlin - Introduction to the PySpark DataFrame API

This talk from PyData 2015 Berlin gives an overview of the PySpark DataFrame API.
#python #pydata #spark #talk

Peter Hoffmann

EuroPython 2014 - Log everything with Logstash and Elasticsearch

When your application grows beyond one machine, you need a central place to log, monitor, and analyze what is going on. Logstash and Elasticsearch store your logs in a structured way. Kibana is a web frontend to search and aggregate your logs.
#python #elasticsearch #talk
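Structured storage starts with emitting structured log records. A minimal sketch using only the standard library: a formatter that writes one JSON object per line, the shape Logstash and Elasticsearch can ingest directly. The field names are illustrative, not a fixed schema:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single-line JSON object so a
    shipper like Logstash can parse fields instead of free text."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("webapp")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("request served")
```

Once every machine logs in this shape, Elasticsearch can index the fields and Kibana can aggregate across hosts, e.g. counting records by level.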

Peter Hoffmann