Monthly Archives: June 2013

SciPy 2013: Day Four/Five

Day Four: Talking, myself and so many others talking…

I’ve been to several poster sessions since going back to school two summers ago. However, this was the first session that I presented in. My poster was for Scholarly, a personal research project involving the network of scholarly citations that started off as a flop and has grown into a little monster. Feel free to read more about it on my current projects page, visit the GitHub repository, or download the poster. It was nice to see so many people interested in the project. Almost everyone instantly realized the importance this dataset will have, and many asked the same question: when can we access the data? Unfortunately, I don’t have a solid answer to that question — yet. But we are making progress, and our data collection servers should be on the web soon! Now, on to what other people spoke about.

Thursday was, well, more talks. By the end of the day I was quite exhausted. While many of them were incredibly interesting, I guess I’ve discovered my limit for sitting and listening to people while I clack away at a computer — three days.

One talk I found of particular interest was Analyzing IBM Watson experiments with IPython Notebook by Torsten Bittner from IBM. I won’t go into all of the details, but the Watson team was able to take ~8,350 SLOC of Java used for training and testing Watson down to ~220 SLOC in an IPython notebook. To make this even more impressive, it ran faster in the notebook. And with that, I’ll move into day five.

Day Five: Let the sprints commence!

Sprints? At a software conference? If you’re picturing 100 sweaty programmers running down hallways for fresh coffee you’re a bit off. There is coffee and occasionally sweat. But, very, very little running.

The sprints are an opportunity for open source projects to get people involved regardless of their skill level. It’s a pretty cool experience. You’re surrounded by programmers from all backgrounds, some legendary, most you’ve never heard of before, but all of whom are very approachable and patient.

After attending the morning pre-sprint session all of the projects distributed themselves in various rooms and went to work. I decided to try something a bit different today.

With all of this ranting and raving I’ve been doing about IPython notebooks I decided I needed to see what I could do with it. Here is the result. I was able to successfully integrate the notebook while benchmarking some search queries in a MongoDB instance populated with fake citation data. It wasn’t cumbersome to do so and documenting as I went along was actually quite enjoyable.
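For the curious, here is a minimal sketch of the kind of timing loop involved, with a stand-in computation in place of a live MongoDB query (the collection and filter mentioned in the comments are hypothetical, since the real notebook ran against our fake citation data):

```python
import timeit

def run_query():
    # Stand-in for the real query; in the notebook this was a pymongo call
    # along the lines of collection.find({'cited_by': {'$exists': True}})
    # against the fake citation data (names and filter are hypothetical).
    return sum(i * i for i in range(1000))

# Run the query in several batches and keep the best batch, which gives the
# least noisy estimate of how fast the query can go.
runs = timeit.repeat(run_query, repeat=3, number=100)
best = min(runs) / 100  # average seconds per query in the fastest batch
print('best time per query: %.6f s' % best)
```

The same loop drops straight into a notebook cell, which is what made documenting the benchmarks as I went so painless.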

Time for a beer.

-H.

Tagged , , ,

SciPy 2013: Day Three

Opening Remarks:

With tutorials concluding yesterday, today began the talks. This is the largest SciPy to date. The registration increase of ~75% over last year was easily noticeable at the opening remarks as I sat in a packed room of fellow coders and scientists. Co-hosts Andy and John announced that the primary themes for this year’s conference are reproducibility and machine learning before introducing the keynote speaker, Fernando Perez.

Keynote: Fernando Perez of IPython — IPython: from the shell to a book with a single tool – The method behind the madness

As you would expect from Fernando, the talk was fast-paced, informative, and enjoyable. The real icing on the cake, however, was the delivery — an IPython Notebook slide show. I’ve already gone into how excited I am about the IPython notebook; I think it will make an awesome medium for teaching CS students. The slide show only further enhances the usefulness of the IPython tool set. -steps down from soapbox- Anyway, I’ve tried to summarize points from Fernando’s talk I found to be of interest.

After a brief set of opening remarks about the amazing spirit of the community and the phases of the research life cycle, Fernando explained some of IPython’s major milestones:

  • 2001 – First version of IPython (it was only 259 lines of code!). Its primary goal was to provide a better interactive Python shell.
  • 2004 – Interactive plotting with matplotlib.
  • 2005 – Interactive parallel computing.
  • 2007 – IPython embedding in Wx apps.
  • 2010 – An improved shell and a protocol to go along with it.
  • 2010 – After five attempts, a sixth leads to what we now know as the IPython notebook.
  • 2010 – Sharing notebooks with zero-install via nbviewer
  • 2012 – Reproducible research with IPython.parallel and StarCluster
  • 2012 – IPython notebook-based technical blogging
  • 2013 – The first White House hackathon (IPython and NetworkX go to DC)
  • 2013 – IPython notebook-based books (“literate computing”): Probabilistic Programming and Bayesian Methods for Hackers.

Continuing, Fernando explained many of the lessons he’s learned since starting the project, highlighted alternative use cases written around IPython and IPython notebooks, thanked the community, and gave us some ideas about what lies ahead for IPython (1.0 in a few weeks!). Once the video becomes available, I will be sure to add it here, and I highly recommend you watch it!

“The purpose of computing is insight, not numbers” — Hamming ’62

-H.

Tagged , , ,

SciPy 2013: Day Two

Tutorial Three: An Introduction to scikit-learn (I) – Gaël Varoquaux, Jake Vanderplas, Olivier Grisel

For a long time I’ve been very curious about machine learning. Up to this point it’s appeared to me much like a mystical unicorn: it seems really cool, but you never really know much about it. This tutorial provided me with an excellent chance to break that mysticism down.

After a brief introduction to scikit-learn and a refresher on numpy/matplotlib we used IPython notebooks to walk through basic examples of what the suite is capable of. We then moved into a quick overview of what machine learning is and some common tactics for tackling data analysis. Now that we were a bit more familiar with the suite itself and machine learning principles, we moved onto more complex examples.

Again, using IPython notebooks we walked through examples of supervised learning (classification and regression), unsupervised learning (clustering and dimensionality reduction), and using PCA for data visualization. We ended the morning session with a couple of more advanced supervised learning examples (determining numbers of hand written digits and Boston house prices based on various factors) and an advanced unsupervised learning example in which we analyzed over 20,000 text articles to determine from which of four categories they likely originated.

One note for further research: How much data should be used for train vs test data? What factors play a role in this and are there any common standards or practices which researchers follow?
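Partially answering my own question after some digging: a common rule of thumb is to hold out 20–30% of the data for testing, with cross-validation as the more rigorous option, and scikit-learn ships a train_test_split helper for exactly this. Here is a minimal sketch of a shuffled hold-out split using only numpy (the 80/20 ratio is just the conventional default, not a rule):

```python
import numpy as np

def train_test_split(X, y, test_fraction=0.2, seed=0):
    """Shuffle the data, then hold out test_fraction of it for testing."""
    rng = np.random.RandomState(seed)
    indices = rng.permutation(len(X))
    n_test = int(len(X) * test_fraction)
    test_idx, train_idx = indices[:n_test], indices[n_test:]
    return X[train_idx], X[test_idx], y[train_idx], y[test_idx]

# 50 samples with 2 features each, split 80/20
X = np.arange(100).reshape(50, 2)
y = np.arange(50)
X_train, X_test, y_train, y_test = train_test_split(X, y)
print(len(X_train), len(X_test))  # 40 10
```

Shuffling before splitting matters: if the data is ordered (say, by class label), a naive head/tail split would give wildly unrepresentative train and test sets.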

Tutorial Four: Statistical Data Analysis in Python – Christopher Fonnesbeck

Statistics is an area of interest for me. Combine that interest with Python Pandas and you’ve got an instant winner, right? Not exactly. While the talk was tagged for beginners, it proved to be otherwise.

The speaker clearly had a very strong background in statistics. However, that background didn’t translate into an easy-to-follow talk. The statistical language was far above me and most of the room — if my observations were correct. Additionally, the version of pandas he used wasn’t the same as the version in the required packages noted in the talk’s description. This resulted in the majority of us not being able to follow along in IPython notebooks and being forced to watch him on the projector.

Please, don’t mistake this for a whine session. Chris knew his stuff; he was able to answer everyone’s questions and smashed some ‘stump the chump’ attempts without batting an eye. But the talk should have been refined and rehearsed, and versions of required packages should have been vetted earlier.

You can’t win them all, right?

-H.

Tagged , , , , ,

SciPy 2013: Day One

Registration:

This year I am fortunate enough to be able to attend SciPy. SciPy is a Python conference focused on scientific programming. A big shout out to the Center for Open Science for making this trip a possibility. This will be the first of a (hopefully) daily blog series in which I will briefly cover how my day went and any lasting impressions it left me with.

The conference organizers made checking in quick and painless. We were served a breakfast buffet that was surprisingly good. Of particular interest to me were the scrambled egg mini-bagels and lemon poppyseed bread slices — yum. Breakfast was followed by a series of tutorials which registrants chose in advance.

Tutorial One: Guide to Symbolic computing with SymPy — Ondřej Certik, Mateusz Paprocki, Aaron Meurer

The SymPy tutorial was … interesting. We simultaneously listened to the lecturer discuss and demonstrate common SymPy functions while completing examples in various IPython notebooks that they provided us with. It was fast paced, too fast for me anyway, and we had to skip over a lot of material.

The tutorial docs recommended experience with IPython and ‘basic mathematics.’ However, I was quite surprised how far my definition of basic mathematics was from theirs. Unfortunately, this left me struggling to keep up with the tutorial even from early on. After a mid-session break, we briefly covered Calculus functionality before being introduced to some real world applications. This is where I became utterly lost.

SymPy’s original lead developer showed us several examples of his use of SymPy while preparing his dissertation in Chemical Physics. These included Poisson equations, Fast Fourier Transforms (FFTs), and variation with Lagrange multipliers. Don’t know what some (any) of those are? It’s okay, neither do I. On a side note, they did show an interesting ‘hack’ using JavaScript injection in an IPython notebook which allowed them to manipulate 3D figures.

While the tutorial itself felt a bit unpolished, the instructors knew their stuff. All in all, SymPy seems like a really interesting tool which I plan to use. When combined with IPython notebooks, I believe it could create very powerful, long-lasting notes for a variety of math-intensive classes. I’ll be testing this out next semester in Physics.
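To give a flavor of the classroom use I have in mind, here is a minimal sketch of the Calculus functionality the tutorial touched on: symbolic differentiation, integration, and limits.

```python
import sympy as sp

x = sp.symbols('x')
expr = sp.sin(x) * sp.exp(x)

# d/dx [sin(x)*e^x] = e^x*sin(x) + e^x*cos(x)
derivative = sp.diff(expr, x)
# The antiderivative: e^x*(sin(x) - cos(x))/2
integral = sp.integrate(expr, x)
# The classic limit sin(x)/x -> 1 as x -> 0
limit = sp.limit(sp.sin(x) / x, x, 0)

print(derivative)
print(integral)
print(limit)
```

Run inside an IPython notebook, these results render as typeset math, which is exactly why the combination seems so promising for class notes.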

Tutorial Two: IPython in depth — Fernando Perez, Brian Granger

Anyone who has listened to either Fernando or Brian could have told you that their tutorial was going to be good. It was. They provided a solid tutorial environment with IPython notebooks that kept me feeling like I was actively working with them throughout the entire tutorial. Whenever anyone had a question, they knew how to answer it quickly and concisely.

A few things I found of particular interest:

  1. IPython Notebook: If you haven’t heard of this, click the link and check it out. I’m not kidding. This is a versatile web tool that is incredibly powerful. For some cool examples of what people have done with the notebook (including writing a book!) click here.
  2. Awesome help functionality: With IPython’s built-in help functionality [ ?, ??, %quickref, and %magic ] you can quickly get a syntax-highlighted help description, the source for a module, or even access a nifty quick reference guide, mostly eliminating the need to pop out of a notebook or console and visit online docs.
  3. Kickass debugger: IPython’s shell is amazing. But I’ve found myself using PyCharm for more advanced bits of code while debugging. After learning about IPython’s magic %debug and %run -d theprogram.py, that may have changed. They provide very powerful and easy-to-use debugging abilities I wasn’t even aware existed.
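For readers outside IPython: the docstring and source views behind ? and ?? are roughly what the standard inspect module provides, minus the syntax highlighting.

```python
import inspect

def greet(name):
    """Return a friendly greeting."""
    return 'Hello, %s!' % name

# inspect.getdoc and inspect.getsource are approximately what IPython's
# ? and ?? display (IPython layers highlighting and metadata on top).
print(inspect.getdoc(greet))
print(inspect.getsource(greet))
```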

Day one down. Time for sleep.

-H.

Tagged , , , , ,

Two years, too quickly

Today marks two years from my end of service date from the Army. The time has gone by too quickly and a lot has happened which I will leave out of this post for the sake of brevity. I would, however, like to take a moment to thank all of the great leaders and Soldiers who helped shape the individual I am today, and to apologize to any of my Soldiers I may have let down or whose standards I failed to live up to — I will remember you all for as long as I live.

“It doesn’t matter what you do. If you’re a broom sweeper, be the best damned broom sweeper you can be. Everything else will fall into place from there.” – 1SG Pendleton

-H.

Tagged , ,

Heroku + Flask: Simplicity

After playing around with Amazon’s free tier EC2/EBS instances, I found myself becoming frustrated with the amount of work it took to get something simple up and running. Take this with a grain of salt; I am by no means a command line master or an experienced dev ops guy. Shortly after mentioning this to a co-worker, I was directed to Heroku.

It was a dream. Heroku takes care of almost all of the dev ops stuff that can make life a nightmare. Following their getting started with Python documentation was incredibly straightforward. While their pricing seems like it can be a bit high, they have certainly provided a well-refined product to a market of developers who either do not have the interest in configuring environments in the cloud or don’t have the time to dedicate to their maintenance. Well done, Heroku.

Heroku and Flask Setup in Ubuntu 13.04:

This is a brief walk through of setting up and launching a simple Heroku environment and Flask server. It follows closely with the official Heroku Python docs with a couple of my personal notes. Enjoy!

  1. Sign up for Heroku.
  2. Download the Heroku toolbelt:

    $ sudo su
    $ wget -qO- https://toolbelt.heroku.com/install-ubuntu.sh | sh

  3. Log in to Heroku (if prompted to generate a key, enter yes):

    $ heroku login

  4. Make a directory to hold your app, create a virtual environment, and activate it:

    $ mkdir /path/to/your/app
    $ cd /path/to/your/app
    $ virtualenv venv --distribute
    $ source venv/bin/activate

  5. Install Flask, our framework, and gunicorn, our web server:

    $ pip install Flask gunicorn

  6. Now, create a very simple hello world application:

    $ vi hello.py

    import os
    from flask import Flask

    app = Flask(__name__)

    @app.route('/')
    def hello():
        return 'Hello World!'

  7. Next, create a Procfile, which states which command should be called when the web dyno starts; gunicorn in our case:

    $ vi Procfile

    web: gunicorn hello:app

  8. Finally, start the process using Foreman from the command line and verify it’s up and running:

    $ foreman start
    $ python
    >>> import requests
    >>> response = requests.get('http://0.0.0.0:5000')
    >>> response.text
    u'Hello World!'
    >>> exit()

  9. Heroku requires a requirements.txt file in the root of your repository to identify it as a Python application. Fortunately, pip does this for us. From the root directory:

    $ pip freeze > requirements.txt
    $ cat requirements.txt
    Flask==0.9
    Jinja2==2.7
    MarkupSafe==0.18
    Werkzeug==0.8.3
    argparse==1.2.1
    distribute==0.6.34
    gunicorn==0.17.4
    wsgiref==0.1.2

  10. Create and set up your .gitignore file to disregard files we don’t want uploaded to our repository:

    $ vi .gitignore

    venv
    *.pyc

  11. Next, initialize the git repo, add your app to it, and commit it:

    $ git init
    $ git add .
    $ git commit -m "Initial commit of my Heroku app."

  12. Push your repo to Heroku and launch your app:

    $ heroku create
    $ git push heroku master
    *Note: If you see something like ‘Push rejected, no Cedar-supported app detected’, verify that your requirements.txt exists, is complete, and is in the root of your application directory structure.*
    $ heroku ps:scale web=1

  13. Visit your app!

    $ heroku open

And that’s basically it. Please visit the official Heroku Python docs for more information.

A few useful commands:

  • heroku logout — logs you out
  • heroku ps:scale worker=0 — stops your dyno
  • heroku run python — launches an interactive Python shell

-H.

Selenium: Initial thoughts + code

I spent the greater part of a day getting familiar with Selenium. Selenium is, as per its homepage, a suite of tools for automating web browsers. So, why would I want to bother automating browsers? Excellent question.

Selenium provides developers with an easy way to automate testing of common use cases. For example, if I want to make sure that a user attempting to log in to a page can hit the enter key after entering their credentials and have it work — in Firefox, or Chrome, or IE — I can do that. Furthermore, combined with tools like Sauce Labs, I can automate this one step further. I can write one test, and now I’m testing that my code runs as expected (or discovering it doesn’t) on up to 128 different browser and version combinations! That is powerful.

Continuing on with the aforementioned login example, let’s dig into some actual code.
This is an example of an early version of a test I wrote in Python using Selenium’s WebDriver.

First, unittest and the relevant selenium utils are imported: webdriver and Keys. The former allows Selenium to drive a native browser, and Keys provides functionality replicating keystrokes and mouse clicks.

import unittest

from selenium import webdriver
from selenium.webdriver.common.keys import Keys

class AccountProfileTest(unittest.TestCase):

After subclassing Python’s unittest.TestCase, the setUp and tearDown methods allow us to properly prepare the environment for each test as well as take care of any cleanup necessary after it’s been run. The setUp function is where we will be focusing, as it is where the login is performed.

First, test data is declared that will be used during user creation. Next, we launch a Firefox instance and direct it toward the desired url. The implicitly_wait function ensures our driver will allow the page to load before attempting to ‘find’ elements within the code. This prevents the driver from failing to locate specific elements. It should be noted that although Selenium is pointed at a local web server, this test was constructed for openscienceframework.org.

    # initialize a Firefox webdriver
    def setUp(self):
        # setup browser and test user data
        self.user_email = 'test@test.com'
        self.user_password = 'testtest'
        self.user_name = 'Testy McTester'
        self.driver = webdriver.Firefox()
        # wait up to 30 seconds for elements to appear before a find fails
        # (implicitly_wait requires a timeout argument; 30 is an example value)
        self.driver.implicitly_wait(30)
        # load OSF homepage
        self.driver.get('http://localhost:5000')

Here is where it gets interesting. Selenium is a browser automation tool, right? Right. So how do we replicate a mouse click on a specific button? Well, first we need to find it. As an aside, I _highly_ recommend using Firebug or a similar tool if you decide to play with Selenium and manually walk through test development. After first locating the element you want to click in the source, we use one of Selenium’s find_element_by_* methods to focus on it. Selenium provides many find functions such as id, css, text, link_text, and xpath, to name a few. Furthermore, you have the option of either find_element_by_*, to locate the first element found, or find_elements_by_*, which returns a list of all elements matching the pattern. In this example I focus on xpath but give examples of grabbing both a single element as well as a list.

The first find method I call looks for the link on the page directed toward the ‘/account’ url. Clicking it with a mouse is emulated by chaining the click method onto the end of the call.

        # load login page
        self.driver.find_element_by_xpath('//a[@href="/account"]').click()

After the page has loaded (as per driver.implicitly_wait() in the setUp method), the driver locates and assigns a list of username and password fields to username_fields and password_fields respectively. You may have noticed that find_elements_by_xpath was used here. find_element_by_* would have worked all the same but I thought it would be nice to include an example.

        # grab the username and password fields
        username_fields = self.driver.find_elements_by_xpath('//form[@name="signin"]//input[@id="username"]')
        password_fields = self.driver.find_elements_by_xpath('//form[@name="signin"]//input[@id="password"]')

Now that we have a list of username and password fields, how do we enter Testy McTester’s login credentials? Selenium makes this easy for us with the send_keys method. As with the simulated mouse click, once you have located the element, call the appropriate method, send_keys, with whatever text you desire. Here, we point at the first username and password field in each list and send the user_email and user_password values declared in setUp.

        # enter the username / password
        username_fields[0].send_keys(self.user_email)
        password_fields[0].send_keys(self.user_password)

Finding the ‘sign in’ submit button is a bit trickier. First, we must find all buttons that have ‘submit’ types. Next, we iterate through the returned list and find the button whose text is ‘sign in.’ Once we have it, we can simulate clicking it with a mouse via the click method.

        # locate and click login button
        submit_buttons = self.driver.find_elements_by_xpath('//button[@type="submit"]')
        submit_button = [b for b in submit_buttons if b.text.lower() == 'sign in'][0]
        submit_button.click()

After setUp has completed, the first unit test below will be executed and any assertions declared will be tested. After it has completed, the tearDown function will execute. Here, it simply closes the Firefox instance, deleting any cookies and clearing the cache to ensure that if setUp is called again it will have a clean slate.

    # close the Firefox webdriver
    def tearDown(self):
        self.driver.close()

And that’s it. This is a very simple example of what Selenium can do and by no means shows anywhere near all of its potential. Not only is Selenium an incredibly powerful tool but it is really fun to work with as well. Manually walking through each step in the browser with Firebug provided me with a pleasant escape from the standard code, code, code.

The current version of this test, and many, many more are up on GitHub and are free for use. My co-workers and I would also appreciate any feedback you may have!

-H.