
BuzzConf
2020

A conference for developers, by developers.
Functional programming, Distributed systems, Embedded systems, Programming languages, Probabilistic programming, Big data, Profiling & debugging, Artificial intelligence, Machine learning & Deep learning.

27 JULY - 31 JULY RESERVE YOUR SPOT FOR FREE
Previous editions: BuzzConf 2019

Keynote speakers

Charity Majors

@mipsytipsy

Operations and database engineer, and founder and CTO of honeycomb.io, which builds observability for distributed systems. Co-author of “Database Reliability Engineering” by O’Reilly.

Talk The Sociotechnical Path to High-Performing Teams (Begins With Observability)

Viral B. Shah

@Viral_B_Shah

One of the creators of the Julia programming language, co-founder and CEO of Julia Computing, and co-author of the book "Rebooting India".

Talk Julia - A language for AI and much more

Peter Alvaro

@palvaro

Assistant professor and researcher at the University of California, specializing in the intersection of databases, distributed systems and programming languages.

Talk What not where: why a blue sky OS?

Aditya Siram

@deech

A Scala developer by day who writes Haskell, Shen, C, Rust, and ATS by candlelight. His talk draws on the Nim programming language.

Talk What FP Can Learn From Static Introspection

Will Kurt

@willkurt

Author of “Bayesian Statistics the Fun Way” and “Get Programming with Haskell”. He is currently the lead Data Scientist for the pricing and recommendations team at Hopper.

Talk The Limits of Probability

Chris Rackauckas

@ChrisRackauckas

Using Julia, Chris researches Scientific Machine Learning, focusing on how the randomness from scientific data can be used to uncover the underlying mechanistic structure. He is the lead developer of DifferentialEquations.jl and pumas.ai.

Talk How full language differentiability enables scientific machine learning and Scientific Software 2.0

Pablo Fernández

@fernandezpablo

Pablo has been shipping backend and frontend code professionally for about 15 years, in about a dozen languages. Lately he has been working on machine learning models at both large and small scale.

Talk Machine learning In The Real World

María Vanina Martínez

Vanina holds a PhD in Computer Science from the University of Maryland, College Park. Her research interests include reasoning under uncertainty, inconsistency management in relational databases and knowledge bases, defeasible reasoning, and argumentation.

Talk Symbolic Reasoning to model Sentiment and Knowledge Diffusion in Social Networks

Sergio Chouhy

Sergio holds a PhD in Mathematics from the University of Buenos Aires and the University of Montpellier, and completed his postdoc in pure mathematics at the University of Stuttgart. He currently works in Data Science & Operations Research at Eryx.

Talk Implementing Deep Q Learning with PyTorch

Location

We're going online for the first time! Join us as we livestream two hours of talks per day for five consecutive days, free of charge.

Talks

JULY 27th

The gulf between elite and high-performing teams and the bottom 50% of teams is bigger than you might think -- and growing. Yet for all the time we spend improving our skills as engineers, we pay far less attention to measuring and improving our effectiveness at the team level. Let's talk through the shifting model for software ownership, what it means for ambitious teams, and why observability is step one to a better world.

Social media platforms, taken in conjunction, can be seen as complex networks; in this context, understanding how agents react to sentiments expressed by their connections is of great interest. We show how Network Knowledge Bases help represent the integration of multiple social networks, and explore how information flow can be handled via belief revision operators for local (agent-specific) knowledge bases. We report on preliminary experiments on Twitter data showing that different agent types react differently to the same information - this is a first step toward developing symbolic tools to predict how agents behave as information flows in their social environment.

JULY 28th

Probability is an increasingly ubiquitous part of our daily lives, especially as developers, researchers and data scientists. It is easy to mistakenly think this powerful tool is all we need to understand our world. This talk will show how our current environment of global pandemic, political unrest and economic uncertainty forces us to face the limits of probability as a tool for reasoning and understanding. This talk will cover both practical examples of the limitations of probability as well as dive into the philosophical roots of these limitations to show that it cannot be our only means to engage with our world.
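As a concrete reference point, the kind of probabilistic reasoning whose limits the talk examines can be as small as a Bayes' rule update. The sketch below is illustrative only, with made-up numbers, and is not taken from the talk:

```python
# Illustrative Bayes' rule update: probability of having a condition
# given a positive test result. All numbers are invented for the example.

def bayes_update(prior, sensitivity, false_positive_rate):
    """Posterior P(condition | positive test) via Bayes' rule."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# A rare condition (1% prior) with a fairly good test still yields a
# surprisingly low posterior -- a classic pitfall of probabilistic intuition.
posterior = bayes_update(prior=0.01, sensitivity=0.95, false_positive_rate=0.05)
print(round(posterior, 3))
```

Even this textbook calculation surprises most people, which hints at why the talk argues that the tool's limits deserve as much attention as its power.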

Juan Pablo Lorenzo: "Delete your code: in search of a minimalist approach to software development"

Gajendra Deshpande: "Computation Techniques for Encrypted Data using Python"

JULY 29th

What if compile time and type level programming in functional programming languages were easy, something you reach for without even thinking about it? What if you could debug type errors with a simple compile time print statement? Write highly flexible systems by being able to introspect into types at compile time? Pre-calculate large portions of your programs for great efficiency? Typed functional programming is a great and fun way to write resilient software, and as type systems have become more and more expressive in recent years, we are able to program sophisticated and useful properties at the type level for even better compile time safety. Just one problem: it is very difficult, requires advanced knowledge of the type system, the syntax is convoluted, the error messages are impenetrable, and it is nearly impossible to debug. This talk will dive into why we should steal static introspection from languages like Nim and D, state-of-the-art imperative programming languages which can solve all these issues, make type systems much more approachable without losing any expressive power, and offer new design possibilities for functional programs.

Many things are being said about Deep Reinforcement Learning, but sometimes it is hard to know where to start. In this talk, I will cover the basics of these algorithms and show you how to implement Deep Q Learning from scratch using PyTorch. I will also talk about applications of this technology in industry.
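For readers who want a feel for the algorithm before the talk, here is a minimal tabular sketch of the Q-learning update that Deep Q Learning approximates with a neural network. The two-state environment is hypothetical, chosen only for illustration, and the sketch deliberately avoids PyTorch so it runs anywhere:

```python
import random

# Tabular Q-learning on a toy 2-state, 2-action environment, as a plain-Python
# sketch of the update rule that Deep Q Learning approximates with a network:
#   Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def env_step(state, action):
    """Toy deterministic environment: action 1 in state 0 reaches the goal."""
    if state == 0 and action == 1:
        return 1, 1.0, True          # next_state, reward, episode done
    return 0, 0.0, False

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0], [0.0, 0.0]]     # q[state][action]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if rng.random() < EPSILON:
                action = rng.randrange(2)                # explore
            else:
                action = q[state].index(max(q[state]))   # exploit
            next_state, reward, done = env_step(state, action)
            target = reward + (0.0 if done else GAMMA * max(q[next_state]))
            q[state][action] += ALPHA * (target - q[state][action])
            state = next_state
    return q

q = train()
# After training, the greedy action in state 0 is action 1 (the goal move).
print(q[0].index(max(q[0])))
```

A DQN replaces the `q` table with a network trained on the same target, which is what makes the method scale to large state spaces.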

JULY 30th

The Julia language is now used by over half a million programmers worldwide. Created to solve the two language problem, Julia is demonstrating performance gains of 50x-100x for many data science tasks such as data loading, data processing, graph processing, machine learning and scaling. Robust support for modern deep learning and the ability to do differentiable programming in an intuitive way is quickly leading to Julia becoming the language of choice for AI workloads. My talk will discuss the origin story of Julia, the formation of the Julia community, and all the amazing things happening in the world of Julia.

Scientific machine learning is a burgeoning field, and it's taking off in Julia. Why? The purpose of this talk is to dive into that question: how has the language accelerated the development of Julia's SciML ecosystem? The core is composability through multiple dispatch. We will showcase how this feature is not only what makes standard Julia code as fast as C or Fortran, but also allows Julia to eschew the traditional idea of "machine learning frameworks" and instead have machine learning directly work on the standard functions and libraries of the whole Julia programming language. This language-wide differentiable programming then builds a foundation where existing climate models, helicopter simulations, and efficiency simulators for battery-powered airplanes can be instantly composed with new tools for machine learning, and we will demonstrate how this has changed the way that researchers in Julia do science.
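The differentiable-programming idea above can be illustrated in miniature with dual numbers. This is a plain-Python toy handling only addition and multiplication, not Julia's actual machinery, but it shows how derivatives can ride along with ordinary function evaluation, with no framework-specific types required:

```python
# A tiny forward-mode automatic differentiation sketch using dual numbers.
# Each Dual carries a value and its derivative; arithmetic propagates both.

class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__

    def __mul__(self, other):   # product rule: (uv)' = u'v + uv'
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f'(x) by running f on a dual number with unit derivative."""
    return f(Dual(x, 1.0)).deriv

# An ordinary Python function, written with no knowledge of Dual.
def poly(x):
    return 3 * x * x + 2 * x + 1

print(derivative(poly, 2.0))  # d/dx (3x^2 + 2x + 1) at x=2 is 14.0
```

Language-wide differentiability generalizes this trick: any code path that reduces to such primitive operations becomes differentiable for free.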

JULY 31st

A tour of the last three years of my career, in which I've productionized three different machine learning projects at a fairly big company (Despegar). I'll cover some of the challenges faced, not only technical but also from a product standpoint, and some of the pedagogical work needed to convince others to let important decisions be made by a machine. Hopefully these insights will help you bring your own models to production.

A world of distributed, persistent memory is on its way. Our programming models traditionally operate on short-lived data representations tied to ephemeral contexts such as processes or computers. In the limit, however, data lifetime is infinite compared to these transient actors. We discuss the implications for programming models raised by a world of large and potentially persistent distributed memories, including the need for explicit, context-free, invariant data references. We present a novel operating system that uses wisdom from both storage and distributed systems to center the programming model around data as the primary citizen, and reflect on the transformative potential of this change for infrastructure and applications of the future.

Code of Conduct

We want BuzzConf to be a place for all people to celebrate knowledge and community without discrimination. Therefore, all BuzzConf speakers, attendees and staff must read and agree to our Code of Conduct as a prerequisite to participate.

READ IT HERE
Our Sponsors

Gold Sponsors

LambdaClass

Startup Sponsors
