The 35-hour Workweek with Python


I was prompted to write this post after reading the NYT’s In France, New Review of 35-Hour Workweek. For those not familiar with the 35-hour workweek, France adopted it in February 2000 with the support of then Prime Minister Lionel Jospin and Minister of Labour Martine Aubry. Simply stated, the goal was to increase quality of life by reducing each worker’s hours, requiring corporations to hire more workers to maintain the same output as before. In theory, this would also reduce the historically high unemployment rate of around 10%.

I mostly write about ML, but I’ve been meaning to write about Pandas’ latest features and tight integration with SciPy, such as data imputation and statistical modeling, and the actual working hours of EU countries will serve as a fun source of data for my examples. I found data on the annual average hours worked per EU country from 1998 to 2013 on the website of the Economic Observation and Research Center for the Expansion of the Economy and Enterprise Development. This notebook won’t serve as in-depth research on the efficacy of the policy, but more as a tutorial on data exploration, although a follow-up post exploring the interactions of commonly tracked economic factors and externalities of the policy might be fun.

In this IPython notebook we’ll generate descriptive statistics on our hours-worked dataset, then work through the processes of interpolation and extrapolation as defined below:

Interpolation is an estimation of a value between two known values in a sequence of values. For this data, this might mean filling in missing average-hours values at date positions between the min and max observations.
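As a quick illustration, here’s a minimal sketch of how Pandas handles this (the numbers below are made up and simply stand in for the real dataset):

```python
import numpy as np
import pandas as pd

# Made-up annual average hours worked, with gaps in the interior
hours = pd.Series([1650.0, np.nan, 1590.0, np.nan, 1550.0],
                  index=[1998, 1999, 2000, 2001, 2002])

# Linear interpolation estimates each missing value from its neighbors
print(hours.interpolate(method='linear'))
```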

Extrapolation is an estimation of a value based on extending a known sequence of values or facts beyond the range that is certainly known. Extrapolation is subject to greater uncertainty and a higher risk of producing meaningless results. For this data, where the max observed date is 2013, we might want to extrapolate what the data points might look like out to 2015.
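SciPy can handle the extrapolation side. A minimal sketch with made-up values, assuming a SciPy recent enough (0.17+) to support interp1d’s fill_value='extrapolate' option:

```python
import numpy as np
from scipy.interpolate import interp1d

# Made-up observations ending at 2013
years = np.array([2009, 2010, 2011, 2012, 2013])
hours = np.array([1560.0, 1554.0, 1549.0, 1545.0, 1542.0])

# fill_value='extrapolate' extends the fitted line beyond the observed range
f = interp1d(years, hours, kind='linear', fill_value='extrapolate')
print(f([2014, 2015]))
```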

Click here to view the notebook in its own window instead of this iframe.


Regularized Logistic Regression Intuition

In this notebook we’ll manually implement regularized logistic regression in order to build intuition about the algorithm’s underlying math and to demonstrate how regularization can address overfitting or underfitting. We’ll then implement logistic regression in a practical manner utilizing the ubiquitous scikit-learn package. The post assumes the reader is familiar with the concepts of optimization, cross-validation, and non-linearity.
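To give a flavor of the manual implementation, here’s a minimal sketch of the regularized cost function, assuming X already includes an intercept column (which is why theta[0] is excluded from the penalty):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y, lam):
    """Cross-entropy loss plus an L2 penalty on all weights except the intercept."""
    m = len(y)
    h = sigmoid(X.dot(theta))
    penalty = (lam / (2.0 * m)) * np.sum(theta[1:] ** 2)
    return (-1.0 / m) * (y.dot(np.log(h)) + (1 - y).dot(np.log(1 - h))) + penalty
```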

Andrew Ng’s excellent Machine Learning class on Coursera is not only a great primer on the theoretical underpinnings of ML, it also introduces its students to practical implementations via coding. Unfortunately for data enthusiasts that are pickled in Python or R, the class exercises are in Matlab or Octave. Ng’s class introduces logistic regression with regularization early on, which is logical as its methods underpin more advanced concepts. If you’re interested, check out Caltech’s more theory-heavy Learning From Data course on edX, which covers similar ground but dives much deeper into topics like the perceptron.

Using training set X and label set y (data from Ng’s class), we’ll use logistic regression to estimate the probability that an observation belongs to class 1 or class 0. Specifically, we’ll predict whether computer microchips will pass a QA test based on two measurements, X[0] and X[1].

We’ll generate additional features from the original QA data (feature engineering) in order to create a richer feature space to learn from and achieve a better fit. One potential pitfall is that we may overfit the training data, which decreases the learned model’s ability to generalize and causes poor performance on new, unseen observations. Regularization aims to keep the influence of the new features in check.
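A rough sketch of what this looks like in scikit-learn (the data here is synthetic; in the notebook the features come from Ng’s QA dataset): PolynomialFeatures does the feature engineering, and LogisticRegression’s C parameter, the inverse of regularization strength, keeps the new features in check.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Synthetic stand-in for the two microchip measurements and pass/fail labels
rng = np.random.RandomState(0)
X = rng.randn(100, 2)
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1.0).astype(int)

# Degree-6 polynomial features, then logistic regression; smaller C means
# stronger regularization and a smoother decision boundary
model = make_pipeline(PolynomialFeatures(degree=6),
                      LogisticRegression(C=1.0, max_iter=1000))
model.fit(X, y)
print(model.score(X, y))
```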

View the notebook at nbviewer instead of this iframe.

Dynamic Time-Series Modeling


Today’s article will showcase a subset of Pandas’ time-series modeling capabilities. I’ll be using financial data to demonstrate them, however the functions can be applied to any time-series data (application logs, netflow, biometrics, etc.). The focus will be on moving or sliding window methods. These dynamic models account for time-dependent changes in a system’s state, whereas steady-state or static models are time-invariant, naively treating the system as if it were in equilibrium.
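A simple moving average is the most basic moving-window method. Here’s a minimal sketch with synthetic prices (newer Pandas uses the .rolling() API; older releases exposed the same idea as pd.rolling_mean):

```python
import numpy as np
import pandas as pd

# Synthetic daily "price" series standing in for real financial data
idx = pd.date_range('2014-01-01', periods=250, freq='B')
price = pd.Series(100 + np.random.RandomState(1).randn(250).cumsum(), index=idx)

# Each point of the SMA summarizes the trailing 20-day window
sma = price.rolling(window=20).mean()
print(sma.tail())
```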

In correlation modeling (Pearson, Spearman, or Kendall) we look at the co-movement between the changes in two arrays of data, in this case time-series arrays. A dynamic implementation would use a rolling correlation that returns a series of new data, whereas a static implementation would return a single value representing the correlation “all at once”. This distinction will become clearer with the visualizations below.

Let’s suppose we want to take a look at how SAP and Oracle vary together. One approach is to simply overlay the time-series plots of both equities. A better method is to utilize a rolling or moving correlation, as it can help reveal trends that would otherwise be hard to detect. Let’s take a look below:
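A minimal sketch of the static vs. rolling distinction (synthetic return series stand in for the real SAP and Oracle data):

```python
import numpy as np
import pandas as pd

# Two correlated synthetic daily return series
idx = pd.date_range('2014-01-01', periods=250, freq='B')
rng = np.random.RandomState(2)
sap = pd.Series(rng.randn(250), index=idx)
orcl = 0.6 * sap + 0.4 * pd.Series(rng.randn(250), index=idx)

print(sap.corr(orcl))                 # static: one number for the whole period
rolling = sap.rolling(60).corr(orcl)  # dynamic: a series that evolves in time
print(rolling.tail())
```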

View the notebook at nbviewer instead of this iframe.

A Real World Introduction to Information Entropy


I’ve been using IPython notebook so much that it might finally be time to stand up a Pelican-based site on this server in order to utilize Jake Vanderplas’ IPython integration method. This post might be my last nbviewer.org iframe crime against proper web design principles.

The intent of this post is to explore information entropy as applied to a toy problem in network security. I’ll outline a common problem and the basic concepts of entropy, then show a practical implementation using the Kullback-Leibler divergence and the Python data stack.

In network security, the latest malware botnet threat paradigm utilizes peer-to-peer (P2P) communication and domain generation algorithms (DGAs). This approach avoids any single point of failure and evades many countermeasures, as the command-and-control framework is embedded in the botnet itself instead of relying on the outdated paradigm of external servers.

A potential method of minimizing the impact of these threats is employing a profiler that detects attributes consistent with DGA and P2P activity, as sketched below.
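To make that concrete, here’s a toy sketch of the idea: DGA domains tend to have flatter (higher-entropy) character distributions than human-chosen ones, and scipy.stats.entropy computes both Shannon entropy and, given a second distribution, the Kullback-Leibler divergence. The domains and baseline below are made up for illustration.

```python
import numpy as np
from scipy.stats import entropy

ALPHABET = 'abcdefghijklmnopqrstuvwxyz'

def char_dist(domain):
    """Smoothed character-frequency distribution of a domain name."""
    counts = np.array([domain.count(c) for c in ALPHABET], dtype=float) + 1e-9
    return counts / counts.sum()

benign = char_dist('google')
suspect = char_dist('xkqzjvwpbfh')  # random-looking, DGA-style string

print(entropy(benign))           # Shannon entropy of the benign distribution
print(entropy(suspect))          # higher for the flatter DGA-like distribution
print(entropy(suspect, benign))  # KL divergence D(suspect || benign)
```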

View the notebook at nbviewer.

Header image from http://archive.wired.com/magazine/2010/11/pl_decode_pachinko/all/; Google “pachinko entropy” for some interesting links.