JDA ICON 2019 was all about technology, APIs, AI (Artificial Intelligence)
and ML (Machine Learning).
#python
#talk
Exasol on Microsoft Azure – automatic deployment in less than 30 minutes
#exasol
#azure
#pydata
#talk
Apache Parquet is an efficient binary, columnar data format that can be used for high-performance data I/O in Pandas and Dask.
#python
#talk
Heroku distilled their principles for building modern cloud applications. These principles have influenced many of our design decisions at Blue Yonder when building our data science platform.
#python
#talk
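One of Heroku's principles is to store configuration in the environment rather than in code. A minimal sketch, assuming Python; the variable name `DATABASE_URL` and the default value are illustrative:

```python
import os

def database_url(default="postgresql://localhost/dev"):
    """Read the database URL from the environment.

    Each deployment (dev, staging, production) injects its own value;
    the code stays identical across environments.
    """
    return os.environ.get("DATABASE_URL", default)

# Without the variable set, the development default applies.
print(database_url())

# A production deployment would inject its own value.
os.environ["DATABASE_URL"] = "postgresql://db.example.com/prod"
print(database_url())
```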
Applying Infrastructure-as-Code principles, with configuration kept in machine-processable definition files, combined with the adoption of cloud computing, provides faster feedback cycles in development and testing and less risk when deploying to production.
#python
#talk
This talk gives an overview of how to deploy web services on the Azure Cloud with tools such as Azure Resource Manager templates, the Azure SDK for Python, and the Azure modules for Ansible, and presents best practices learned while moving a company into the Azure Cloud.
#python
#talk
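To make the definition-file idea concrete, here is a minimal sketch of an Azure Resource Manager template declaring a single storage account; the parameter name and API version are illustrative:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2019-06-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```

Because the desired state lives in a file like this, it can be versioned, reviewed, and deployed repeatably instead of being clicked together in the portal.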
Apache Spark is a computational engine for large-scale data processing.
PySpark exposes the Spark programming model to Python: it provides APIs
for Resilient Distributed Datasets (RDDs) and for DataFrames.
#python
#pydata
#spark
#talk
This talk from PyData Berlin 2015 gives an overview of the PySpark DataFrame API.
#python
#pydata
#spark
#talk
When your application grows beyond one machine, you need a central place to
log, monitor and analyze what is going on. Logstash and Elasticsearch store
your logs in a structured way. Kibana is a web frontend to search and
aggregate your logs.
#python
#elasticsearch
#talk
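The "structured way" usually means one JSON object per log line, which Logstash can forward to Elasticsearch without extra parsing. A minimal sketch using only the Python standard library; the field names are illustrative, and real setups often use libraries such as python-logstash or structlog instead:

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line, a shape
    that Logstash's json codec can ship to Elasticsearch directly."""

    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })


handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("webapp")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("user logged in")  # emits a single JSON line
```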