Matt Seymour

When installing the lxml library from pip you may encounter errors during the install if a number of prerequisite libraries are not installed on your system. On Ubuntu or Debian these errors come in the form of:

x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fdebug-prefix-map=/build/python2.7-2.7.13=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/usr/include/python2.7 -c src/lxml/lxml.etree.c -o build/temp.linux-x86_64-2.7/src/lxml/lxml.etree.o -w                                                                    
In file included from src/lxml/lxml.etree.c:111:0:                                                                                                                                                             
src/lxml/etree_defs.h:53:31: fatal error: libxml/xmlversion.h: No such file or directory                                                                                                                       
 #include "libxml/xmlversion.h"                                                                                                                                                                                
                               ^                                                                   
compilation terminated.                                                                  
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1

To resolve these errors, two development libraries need to be installed from apt:

sudo apt install libxml2-dev libxmlsec1-dev

Once they are installed, re-run pip install lxml in your environment.
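
A quick way to confirm the build worked is to import lxml and parse a trivial document (just a sanity check):

from lxml import etree

# parse a small document to prove the lxml C extension built and imports correctly
root = etree.fromstring('<root><child name="a"/></root>')
print(etree.tostring(root))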

Building Python 3 from source on Ubuntu or Debian is relatively simple, so long as you have installed the required prerequisites. Listed below are the packages required for a successful build:

sudo apt-get install -y make build-essential curl libssl-dev libbz2-dev zlib1g-dev libreadline-dev libsqlite3-dev llvm libncurses5-dev libncursesw5-dev tk-dev wget xz-utils

To build Python you can follow the instructions in the official Python documentation (make sure you read them carefully and use make altinstall). Another, and sometimes easier, option is pyenv, a version manager for Python installations (https://github.com/pyenv/pyenv).
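
Once the build (or pyenv install) completes, a quick sanity check with the new interpreter is to import the optional standard library modules which depend on the development packages listed above:

# if any of these imports fail, the corresponding -dev package was missing at build time
import ssl, bz2, zlib, readline, sqlite3, curses, tkinter
print("all optional stdlib modules built OK")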

Unlike many other programming languages, JavaScript does not have a native range function. By range function I mean a function which, given an integer, returns an Array/List of sequential values.

For example in Python you can do the following:

> range(5)
< [0, 1, 2, 3, 4]

Or, if you want the numbers 1-5:

> range(1, 6)
< [1, 2, 3, 4, 5]
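
(A quick aside: in Python 3, range returns a lazy range object rather than a list, so wrap it in list() to see the values shown above.)

> list(range(1, 6))
< [1, 2, 3, 4, 5]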

The good news is you can simulate a basic range function with the following, if slightly verbose, code:

> Array.from(Array(5), (_, x) => x + 1)
< [1, 2, 3, 4, 5]

After updating my machine today I came across the following error:

error: There was a problem with the editor 'vi'

The issue appears to relate to the default editor git is using. It can be easily fixed by running the following command:

git config --global core.editor `which vim`

It is simple to launch Sublime Text from the command line on OSX: you just need to symlink a binary found within the Sublime Text installation directory into /usr/local/bin.

This can be done using the following command:

ln -s /Applications/Sublime\ Text.app/Contents/SharedSupport/bin/subl /usr/local/bin/subl

Within your favourite OSX terminal/shell you can now use the command subl to launch Sublime Text.

By default brew will give you access to the latest version of Python via the brew install command. But what if you want to install a specific version?

The simplest option is to use pyenv, which allows you to install specific Python versions on your machine.

Usefully pyenv is available as a brew package:

brew install pyenv

Note: read through the command line summary and caveats after installing; they will help you better configure your setup. For example, I would rather use the Homebrew directories rather than ~/.pyenv. To do this, add the following line to your profile (.bashrc, .zshrc):

export PYENV_ROOT=/usr/local/var/pyenv

To enable autocompletion, also include:

if which pyenv > /dev/null; then eval "$(pyenv init -)"; fi

Once installed, you can download and install the specific Python versions you are looking for. To do this use the following command:

pyenv install 3.4.4

If you want to make this work with virtualenv, you can set the Python interpreter using the -p/--python flag.

virtualenv -p <path to python bin>

virtualenv -p /usr/local/var/pyenv/versions/3.4.4/bin/python
# or
virtualenv -p ~/.pyenv/versions/3.4.4/bin/python
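
Once the new environment is activated, a quick sanity check from within Python confirms which interpreter and version it is using:

import sys

print(sys.executable)   # path to the interpreter inside the virtualenv
print(sys.version)      # should report the pyenv-installed version, e.g. 3.4.4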

When trying to format JSON in vim you can use the following command, which makes use of the Python json.tool module:

:%!python -m json.tool

Here is an example of the command at work:

1 {
2     "name": "my-project-name",
3   "version": "0.1.0",
4     "devDependencies": {
5             "grunt": "~0.4.5",
6                 "grunt-contrib-jshint": "~0.10.0",
7                 "grunt-contrib-watch":"~1.0.0"
8                           }
9                           }


:%!python -m json.tool



> Output
1 {
2     "devDependencies": {
3         "grunt": "~0.4.5",
4         "grunt-contrib-jshint": "~0.10.0",
5         "grunt-contrib-watch": "~1.0.0"
6     },
7     "name": "my-project-name",
8     "version": "0.1.0"
9 }
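
Under the hood json.tool essentially loads the JSON and re-serialises it with json.dumps. If you want the same formatting outside of vim, a minimal sketch (the file name is illustrative):

import json

with open('package.json') as fh:    # hypothetical input file
    data = json.load(fh)

# json.tool pretty-prints with a four-space indent and sorted keys
print(json.dumps(data, indent=4, sort_keys=True))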

In bash and zsh you can use an in-line shortcut to reuse the last argument of the previous command as an argument for the next command.

For example say you view the contents of a file using cat:

cat /path/to/file/name.txt

Now you realise you want to edit the file in some way. Instead of typing the command vim /path/to/file/name.txt you can instead use:

vim !$

Or even the following example:

> cat /path/to/file/name.txt
> mv !$ /new/file/path.txt

Not only does this save you some valuable keystrokes, it is also quicker and less error prone.

By default Finder in OSX hides a lot of files from the user. This is useful if you are a standard user, but when you need access to /usr, /var or /Library it is really annoying. To make Finder show all files, enter the following command into a terminal:

defaults write com.apple.finder AppleShowAllFiles TRUE

You will then need to either log out of your profile or run the command:

sudo killall Finder

Last month I wrote the article 'How to convert a file path into a file url'. This month let's convert a file URL into a Python file path.

import urlparse, urllib

file_url = 'file:///Users/auser/a.file'
file_path = urllib.url2pathname(urlparse.urlparse(file_url).path)

print file_path
> output: /Users/auser/a.file

And the Python code to convert a file path into a file:// URL:

import urlparse, urllib
path = '/Users/myuser/a.file'

print urlparse.urljoin(
    'file:', urllib.pathname2url(path)
)

> output: file:///Users/myuser/a.file
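
On Python 3 the same helpers live under urllib.parse and urllib.request; a sketch of the equivalent code (assuming Python 3):

from urllib.parse import urlparse, urljoin
from urllib.request import url2pathname, pathname2url

file_url = 'file:///Users/auser/a.file'
print(url2pathname(urlparse(file_url).path))    # /Users/auser/a.file

path = '/Users/myuser/a.file'
print(urljoin('file:', pathname2url(path)))     # file:///Users/myuser/a.file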

As a backend developer I generally work with Python, (some) PHP and C#. Front-end web development tools, whilst alien to me, are not something I get to spend a lot of time working on. So I thought I would try writing my own grid system in CSS.

I introduce Griddy, a(nother) micro CSS framework.

The idea behind Griddy was partly to be a learning exercise in CSS and responsive development, but also to be something I can use in various projects, moving me away from Bootstrap. Whilst Bootstrap has served me well and I make good use of it, I find it bloated for the functionality I require.

Current features of Griddy:

  • Mobile first
  • Responsive
  • Simple
  • Media breakpoints: 768px, 950px, 1200px

The source for Griddy can be found at https://github.com/mattseymour/griddy/.

Within Linux it is possible to pause (freeze) and resume processes.

Before you can pause a process you need to know its process ID (pid). This can be found using the ps command.

ps -A | grep <process_name>
# a process might not be obviously named

The output will be something like:

matt@wasdy : ~
   [1510|09:10:52] $  ps -A | grep chrome
    2777 ?        00:01:14 chrome
    2790 ?        00:00:00 chrome
    2811 ?        00:00:00 chrome
    2827 ?        00:00:59 chrome
    2835 ?        00:00:00 chrome
    2842 ?        00:00:02 chrome
    2862 ?        00:00:03 chrome
    2874 ?        00:00:00 chrome
    2890 ?        00:00:00 chrome

What we are interested in is the first number: this is the process ID, which we will use to pause and resume the process.

Pausing a process

In the command line run the following command:

sudo kill -STOP <pid>

At this point your application will appear to stop working. Well the process is paused... duh

Resuming a process

In the command line run the following command:

sudo kill -CONT <pid>
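
If you would rather drive this from Python, the same signals can be sent with os.kill; a minimal sketch (the pid is illustrative, and you may need root for processes you do not own):

import os
import signal

pid = 2777                      # illustrative pid, taken from the ps output above

os.kill(pid, signal.SIGSTOP)    # pause (freeze) the process
os.kill(pid, signal.SIGCONT)    # resume it again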

Linux kernel 4 has been released and can be installed as an upgrade, whilst the Ubuntu repository is still on an older 3.x version. Kernel 4 has a number of useful features and bug fixes.

To upgrade your current kernel version you first need to download the deb packages from kernel.ubuntu.com. Three packages are required:

  • linux-headers-4.*_all.deb
  • linux-headers-4.*_{i386|amd64}.deb
  • linux-image-4.*_{i386|amd64}.deb

Downloads can be found on the kernel.ubuntu.com mainline site.

When choosing your download I recommend not downloading anything which is an RC (release candidate) unless you need to for a specific reason (like bug fixes). The latest kernel releases will be towards the bottom of the page.

Once you have downloaded the three files listed above, install them using:

sudo dpkg -i <path-to-deb>
# install in the order stated above

Once installed you will need to update grub to use the newest installed version of the kernel. This is done by running:

sudo update-grub

Now restart your machine and in a terminal type uname -r; your kernel version should now be updated.

Note: kernel version 3.19 has a bug which means Juniper VPN will not work. If you are running this kernel version you will need to upgrade to a newer kernel to resolve the issue. If you are running Ubuntu 15.04 you can upgrade to Linux kernel 4.

Juniper VPN kindly do not offer a 64bit deb package for Ubuntu users. This means you need to find a way to install and run the VPN on your own. With Ubuntu 15.04 being relatively new at the time of writing this may be a little more difficult than it needs to be for some users.

Hopefully this installation guide will get you up and running with little fuss.

First I started by installing openjdk and the icedtea plugin and xterm:

sudo apt-get install openjdk-7-jdk icedtea-7-plugin xterm

Followed by installing the 32-bit version of the OpenJDK 7 JRE:

sudo apt-get install openjdk-7-jre:i386

Once installed you should check that update-alternatives is symlinked into /usr/sbin/; if it is not, create the symlink by running:

ln -s /usr/bin/update-alternatives /usr/sbin/

Update the java version being used by update-alternatives to be java-7-openjdk-amd64/jre/bin/java:

sudo update-alternatives --config java
# select the option which is the recently downloaded java-7-openjdk-amd64/jre/bin/java

At this point, if you try to run the Juniper network tool you may get an error saying 32-bit library files are missing. You will need to install the following packages so these errors do not appear:

sudo apt-get install libstdc++6:i386 lib32z1 lib32ncurses5 lib32bz2-1.0 libxext6:i386 libxrender1:i386 libxtst6:i386 libxi6:i386 libbz2-1.0:i386

Now restart Firefox and log into the Juniper VPN. You should now be able to access and install the VPN client software.

When installing pycurl via pip a number of errors can occur during the build process. The most likely cause of this is missing dependencies which are required to build the package. The good news is this can be easily fixed by installing the missing libraries and retrying the install.

Example error raised by pip when installing pycurl:

In file included from src/docstrings.c:4:0:
src/pycurl.h:145:31: fatal error: openssl/crypto.h: No such file or directory
 #   include <openssl/crypto.h>
                               ^
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1

Solution

Within Ubuntu run sudo apt-get install libssl-dev libcurl4-openssl-dev python-dev, then re-run pip install pycurl to install the package without error.
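
A quick way to confirm the build succeeded is to import the module and print its version string:

import pycurl

# pycurl.version reports the libcurl and SSL versions it was compiled against
print(pycurl.version)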


So last week I was lucky enough to find a weekend (21st-22nd March 2015) where I could afford the time to go up to London and participate in Django Sprint London. It was a one and a half day event focused on meeting fellow Django community members as well as being able to contribute further to the Django community. The event took place in the well equipped offices of Potato.

The event was split into two parts. Day one (evening meetup) was a chance for community members to meet, chat, and learn from some talks; as well as have a few beverages. Talks were given by:

Luke Benstead on Djangae Google Slide

Baptiste Mispelon on Contributing to Django in a Nutshell

Marc Tamlyn on Whats new in Django 1.8

Day 2: Let the sprints begin

Sunday morning was when the sprinting started. Arriving at the offices of Potato in London at 9am, I was greeted with the smell of coffee and pastries. Once set up, it was time to get down to business. To start the day off I picked a couple of easy-pickings tickets from Django's current list of issues (Django uses Trac for issue tracking). Upon completing these tickets I then moved my attention to some more challenging tickets before lunch.

The afternoon was spent looking into support issues for the djangoproject.com website. Whilst contributing to the Django framework is something I had done multiple times in the past, contributing to the project website was something I had previously overlooked. I would recommend others contribute to this project too, as it is something which can be overlooked by the community.

The end of the day for me was at 4:30; overall the day was a great learning and sharing experience. It was great to be able to put names to faces and meet new Django-ers from across Europe.

Whilst upgrading from Gitlab 6.x to 7.7, an error can occur because of a missing database migration.

After successfully running:

gitlab-rake db:migrate

I opened the webserver logs to see 500 errors from the /projects/ URL.

ActionView::Template::Error (PG::Error: ERROR:  column projects.imported does not exist

It looks as though, whilst the migrations complete successfully, they miss this one crucial change.

Solution: open a database connection and run the following command:

ALTER TABLE projects ADD COLUMN "imported" BOOLEAN DEFAULT FALSE NOT NULL;

/usr/bin/env: node: No such file or directory

The above error can occur when running Node.js on Ubuntu. The cause is simply that node cannot be found on your system PATH. The reason is that the Ubuntu software repositories already contain an unrelated package called node which claims /usr/bin/node, so the Node.js package installs its binary as nodejs rather than node.

What this means is that anything expecting a node binary, such as some Node modules and scripts, fails on Ubuntu.

So the solution:

To resolve this issue you can simply symlink /usr/bin/nodejs to /usr/bin/node. This will mean both node and nodejs are available on the system path.

Open a terminal and using sudo enter:

sudo ln -s /usr/bin/nodejs /usr/bin/node

In some cases people have been finding Ruby web applications in development very slow to respond (upwards of 3-7 seconds). The issue seems to be caused by the introduction of a new webrick configuration option, DoNotReverseLookup, which has a default value of nil (treated as false). By changing this value (overriding the global default), webrick will not perform a reverse DNS lookup for each request, substantially speeding up each page request.

Solution:

Change the default value of the webrick configuration option DoNotReverseLookup to true.

Old:

:DoNotReverseLookup => nil,

New:

:DoNotReverseLookup => true,

There is no built-in git dry-run option, which is a shame as it is a feature I would use all the time. But there is a way to simulate it without polluting the git history.

Performing a git merge with --no-commit and --no-ff will merge the two code bases together without recording a merge commit. This allows you to examine, test, and undo the merge if required.

git merge --no-commit --no-ff <branch-name>

If you need to undo the merge you can use:

git merge --abort

This will return git to its state before the merge occurred.

If you want to create a copy of a file from a specific git commit, you can do so using the following git command:

git show 32206111:my-app/indirectory/file-to-copy.py > my-app/indirectory/copy.py

The first part of the command prints the content of the file as it was at the specified commit; the redirection then writes that content to the new file.

The Django send_mail function is a really simple way to send an email via Django. It requires only a few parameters and Django settings in order to send emails from your application.

A common question asked when using Django's send_mail is: how do I set the Reply-To email header?


tldr; Use the EmailMessage class and set the headers keyword argument to {'Reply-To': 'reply-to@example.com'}.


What is Reply-To?

Reply-To provides a way to direct responses to a different email address than the one the email was originally sent from. For example, an email may be sent from a server (server@example.com) but the Reply-To address may be an admin (admin@example.com); this means that if you choose to reply to the message from server@example.com, you will be sending an email to admin@example.com.

What's the use of this?

Setting Reply-To means your Django web application can send emails from one valid email account while all responses go to another. This comes in useful for things like email-based contact forms.

So how do I set Reply-To in Django?

The easiest solution is to make use of the EmailMessage class.


from django.core.mail import EmailMessage

email = EmailMessage('This is the subject', 'This is the body of your email',
                     'from@example.com', ['to@example.com'],
                     headers={'Reply-To': 'another@example.com'})
# send() sends the email
email.send()

If you use a virtual environment (virtualenv/virtualenvwrapper), at some point you will want to upgrade its Python version. As it stands there is no built-in way of doing this, so the steps below are required.

1. Freeze your current virtual environment: pip freeze > requirements.txt

2. Remove your existing virtual environment


# if using virtualenvwrapper
rmvirtualenv <environment name>
# virtualenv
rm -r /path/to/virtual/environment

3. Create your new virtual environment


# virtualenvwrapper
mkvirtualenv -p /path/to/python/version <environment name>
# virtualenv
virtualenv -p /path/to/python/version <environment name>

4. Load new virtual environment


# virtualenvwrapper
workon <environment name>
# virtualenv
source /path/to/virtual/environment/bin/activate

5. Install the pip dependencies from requirements.txt


pip install -r /path/to/requirements/file

You have been working hard building your app. Now the time has come to deploy your application to production, but there is one question: which WSGI server shall I use? Your two real options for Django are Gunicorn and uWSGI. Both integrate easily with Django and, even as a beginner, you can get either up and running within an hour.

But which one should you choose?

tldr; Either Gunicorn or uWSGI correctly configured will give you more than adequate performance for your website. Go with the wsgi server you think is best suited for you.

Okay first things first...

I will not be comparing the performance of Gunicorn and uWSGI. There are a number of reasons for this, the primary one being that there are already a large number of posts talking about performance. Simply put, either WSGI server will give you more than enough performance if configured correctly.

Introducing uWSGI

uWSGI is a high-performance, powerful WSGI server. It has a massive number of configuration options (personally I think too many) to allow you to tailor the server to your needs, though a large number of them you will never need to use. uWSGI works well with nginx, as nginx speaks the uwsgi protocol by default. A basic WSGI server can be set up easily, but tailoring it to your system will take time as you figure out all the configuration options.

Introducing Gunicorn

Gunicorn is simple, performant, and easy to set up and configure. Gunicorn may not be as performant as uWSGI (depending on your source, roughly 10-15% down on maximum requests per second), but it is still fast enough that the standard user would have no idea of the performance difference. What Gunicorn loses in performance it gains in simplicity and ease of use; Gunicorn has a fraction of the options uWSGI has. A Gunicorn WSGI server can be set up and running in less than 10 minutes.

So which one is better?

uWSGI is highly configurable (maybe too much so), whereas Gunicorn is simpler. Both have more than enough performance to run any website at scale. You have to remember that in most web applications the datastore and API calls will be the performance bottleneck.

The question you need to ask yourself is this: is the extra time spent configuring uWSGI beneficial to your application, or would that time be better saved by using Gunicorn? The Gunicorn settings file can easily be made to work with uWSGI, so one option is to start by using Gunicorn and, if the situation requires it, make the move to uWSGI.

My personal preference is towards the Gunicorn WSGI server. Whilst offering excellent performance, it is simple to configure, well documented, and can be used whilst developing my Django applications locally. When used with supervisord it provides a stable, well-rounded system.
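
For reference, a Gunicorn settings file is itself just Python. A minimal sketch (the values and the myproject module name are illustrative, not recommendations):

# gunicorn.conf.py -- used with: gunicorn -c gunicorn.conf.py myproject.wsgi
bind = "127.0.0.1:8000"     # address and port to listen on
workers = 3                 # number of worker processes
timeout = 30                # workers silent for longer than this (seconds) are restarted
accesslog = "-"             # "-" sends the access log to stdout
errorlog = "-"              # "-" sends the error log to stderr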

Earlier this month I released python-envvars, a Python library which reads the contents of a .env file and adds the key/value pairs to Python's os.environ dictionary. The idea behind envvars is to make it easier to add environment variables to a Python web application (Django, Flask, etc.).

Storing configuration settings within environment variables allows your application to be moved from server to server without having to alter its code base. Using .env files rather than the host system's environment variables means you simply upload the .env file instead of messing around with the host system setup. The .env files can be stored in your secure files location, keeping credentials safe.

A .env file looks something like:

SECRET_KEY=AbCdEfG
DEBUG=False
SOME_VARIABLE=1348904

The envvars package is available on pypi and can be installed using:

pip install envvars

To load the contents of the .env file into your application you simply call:

import envvars
import os

envvars.load('/path/to/file/.env')

print os.environ['SECRET_KEY']
> AbCdEfG

Envvars also has a get method which will fetch a value from os.environ; envvars.get('KEY') has the added ability to return the value as its Python type. For instance:

import envvars
import os

envvars.load('/path/to/file/.env')

print envvars.get('DEBUG')
> False # type boolean

print envvars.get('SOME_VARIABLE')
> 1348904 # type int

# if the .env value of SOME_VARIABLE was "1348904"
# Then the return value of envvars.get('SOME_VARIABLE') will be type str

Ways to use environment variables in Django.

  1. Store settings in environment variables

By storing settings in environment variables you can freely keep your project code on different machines without extra precautions relating to sensitive information.

Storing items like your SECRET_KEY and database passwords in environment variables means they can be set on a per-environment basis.

Example


SECRET_KEY = os.environ.get('PROJECT_SECRET_KEY')

Storing environment variables:

My single biggest annoyance with environment variables is setting them when provisioning or running a server in production. My personal solution is to use envvars, a small package I created for handling environment variables when running applications as different users.
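
Putting the two together, a settings.py can load the .env file and then read everything through os.environ; a minimal sketch (the .env location and the DATABASE_PASSWORD key are illustrative):

# settings.py (excerpt)
import os

import envvars

# load the key/value pairs from the .env file into os.environ
envvars.load(os.path.join(os.path.dirname(__file__), '.env'))

SECRET_KEY = os.environ.get('PROJECT_SECRET_KEY')
DEBUG = envvars.get('DEBUG')    # returned as a boolean by envvars.get
DATABASE_PASSWORD = os.environ.get('DATABASE_PASSWORD')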