Transformation Matrices for Robotic Arms

Python functions for serial manipulators.

# -*- coding: utf-8 -*-
"""
Functions for calculating Basic Transformation Matrices in 3D space.
"""
from math import cos, radians, sin
from numpy import matrix

def rotate(axis, theta, angular_units='radians'):
    '''Compute Basic Homogeneous Transform Matrix for
    rotation of "theta" about specified axis.'''
    #Verify string arguments are lowercase
    axis=axis.lower()
    angular_units=angular_units.lower()
    #Convert to radians if necessary
    if angular_units=='degrees':
        theta=radians(theta)
    elif angular_units=='radians':
        pass
    else:
        raise ValueError('Unknown angular units. Please use radians or degrees.')
    #Select appropriate basic homogeneous matrix
    if axis=='x':
        rotation_matrix=matrix([[1, 0, 0, 0],
                               [0, cos(theta), -sin(theta), 0],
                               [0, sin(theta), cos(theta), 0],
                               [0, 0, 0, 1]])
    elif axis=='y':
        rotation_matrix=matrix([[cos(theta), 0, sin(theta), 0],
                               [0, 1, 0, 0],
                               [-sin(theta), 0, cos(theta), 0],
                               [0, 0, 0, 1]])  
    elif axis=='z':
        rotation_matrix=matrix([[cos(theta), -sin(theta), 0, 0],
                               [sin(theta), cos(theta), 0, 0],
                               [0, 0, 1, 0],
                               [0, 0, 0, 1]])
    else:
        raise ValueError('Unknown axis of rotation. Please use x, y, or z.')
    return rotation_matrix

def translate(axis, d):
    '''Calculate Basic Homogeneous Transform Matrix for
    translation of "d" along specified axis.'''   
    #Verify axis is lowercase
    axis=axis.lower()
    #Select appropriate basic homogeneous matrix
    if axis=='x':
        translation_matrix=matrix([[1, 0, 0, d],
                                  [0, 1, 0, 0],
                                  [0, 0, 1, 0],
                                  [0, 0, 0, 1]])
    elif axis=='y':
        translation_matrix=matrix([[1, 0, 0, 0],
                                  [0, 1, 0, d],
                                  [0, 0, 1, 0],
                                  [0, 0, 0, 1]])
    elif axis=='z':
        translation_matrix=matrix([[1, 0, 0, 0],
                                  [0, 1, 0, 0],
                                  [0, 0, 1, d],
                                  [0, 0, 0, 1]])
    else:
        raise ValueError('Unknown axis of translation. Please use x, y, or z.')
    return translation_matrix

if __name__=='__main__':
    #Calculate arbitrary homogeneous transformation matrix for CF0 to CF3
    H0_1=rotate('x', 10, 'degrees')*translate('y', 50)
    H1_2=rotate('y', 30, 'degrees')*translate('z', 10)
    H2_3=rotate('z', -20, 'degrees')*translate('z', 10)
    H0_3=H0_1*H1_2*H2_3
    print(H0_3)
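
As a quick check on the result, the rotation and position components can be sliced out of the 4x4 homogeneous transform (this snippet just assumes the script above has been run):

R0_3=H0_3[0:3,0:3]   #3x3 rotation of CF3 relative to CF0
p0_3=H0_3[0:3,3]     #position of CF3's origin expressed in CF0
print(R0_3)
print(p0_3)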

Also available on GitHub.

Reading & Writing Excel Data with Python

Using pandas to read/write data in Excel.

In this post we’re going to explore how easy it is to read and write data in Excel using Python.  There are a few different ways to do this; we’re going to use pandas.  The pandas DataFrame is the main data structure that we’ll be working with.

Reading

The sample Excel data we’ll be using is available on Tableau’s Community page.

To load a single sheet of the Excel file into Python, we’ll use the read_excel function:

import pandas as pd
sales_data=pd.read_excel(r'C:\Users\Craig\Downloads\Sample - Superstore Sales (Excel).xls')

This loads one tab of the spreadsheet (.xls, .xlsx, or .xlsm) into a DataFrame.
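
To sanity-check the load, the usual DataFrame inspection methods work on sales_data:

print(sales_data.head())    #first five rows
print(sales_data.shape)     #(number of rows, number of columns)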

In fact, if we don’t want to download the Excel file locally, we can load it into Python directly from the URL:

sales_data_fromURL=pd.read_excel('https://community.tableau.com/servlet/JiveServlet/downloadBody/1236-102-1-1149/Sample%20-%20Superstore%20Sales%20(Excel).xls')

Note that we can load specific sheets (sheetname), grab specific columns (parse_cols), and handle N/A values (na_values) by using the optional keyword arguments.
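
For example (these keyword names match the older pandas API used throughout this post; newer pandas versions renamed sheetname to sheet_name and parse_cols to usecols, and the 'Orders' sheet name below is just an assumption about the workbook):

sales_orders=pd.read_excel(r'C:\Users\Craig\Downloads\Sample - Superstore Sales (Excel).xls',
                           sheetname='Orders',        #load only this sheet
                           parse_cols='A:D',          #only the first four columns
                           na_values=['n/a', 'N/A'])  #treat these strings as missing values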

To load all of the sheets/tabs within an Excel file into Python, we can set sheetname=None:

sales_data_all=pd.read_excel(r'C:\Users\Craig\Downloads\Sample - Superstore Sales (Excel).xls', sheetname=None)

This will return a dictionary of DataFrames – one for each sheet.
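
Individual sheets can then be pulled out of that dictionary by name (again assuming a sheet called 'Orders'):

print(list(sales_data_all.keys()))    #names of all sheets in the workbook
orders=sales_data_all['Orders']       #DataFrame for a single sheet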

Writing

Writing existing Python data to an Excel file is just as straightforward.  If our data is already in a DataFrame, we can call its to_excel('filename.xlsx') method.  If not, we can convert the data into a DataFrame first and then call to_excel.

import pandas as pd
import numpy as np
df=pd.DataFrame(np.random.randn(50,50))
df.to_excel('MyDataFrame.xlsx')

This works for .xls, .xlsx, and .xlsm files.  pandas also provides writer functions such as to_csv, to_sql, to_html, and a few others.
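
For example, writing the same DataFrame out as a CSV file is one line:

df.to_csv('MyDataFrame.csv', index=False)   #index=False skips writing the row index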

To write data to multiple sheets, we can use pd.ExcelWriter as shown in the pandas documentation:

with pd.ExcelWriter('filename.xlsx') as writer:
    df1.to_excel(writer, sheet_name='Sheet1')
    df2.to_excel(writer, sheet_name='Sheet2')

Quick Data Grabs

Try experimenting with the pd.read_clipboard() and DataFrame.to_clipboard() functions to quickly transfer data from Excel to Python and vice versa.
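
A minimal round trip looks like this (copy a range of cells in Excel before running the first line):

clipboard_df=pd.read_clipboard()       #parse the clipboard contents into a DataFrame
clipboard_df.to_clipboard(index=False) #push it back, ready to paste into Excel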

Thank you, pandas, for creating and maintaining excellent documentation.

Creating Images with PyQRCode

Mass generation of QR codes with Python.

This is a script for taking a list of URLs from a spreadsheet and generating a captioned QR code for each entry.

Specifically, the script reads the 'LongURLs' input file, shortens the URLs, creates QR codes, adds captions, and saves each QR code as a .png image file.

We shorten the URLs to reduce the complexity of the QR code, which makes it less likely to become unreadable from printing imperfections and dirt smudges.

We’ll use:

1. NumPy
2. pandas
3. PyQRCode
4. pyshorteners
5. PIL (Pillow)
6. PyPNG

We load our URLs and IDs (captions) using the LongURLs template.


Next, we run the script and our QR codes will be output as PNG files in the same directory as our script.

Email links (such as "mailto:test@mailinator.com") can be used as input URLs, but you'll need to handle the ValueError ('Please enter a valid url') that pyshorteners will raise; one way to do that is sketched after the script.

import numpy as np
import pyqrcode
import pandas as pd
from pyshorteners import Shortener
from PIL import ImageFont
from PIL import Image
from PIL import ImageDraw

shortener=Shortener('Tinyurl',timeout=10)
DF = pd.DataFrame(pd.read_excel(r'C:\Users\Craig\Documents\Python Scripts\LongURLs.xlsx',
                                sheetname='LongURLs',parse_cols='A:B'))
LongURL=DF.iloc[:,0]
ID=DF.iloc[:,1]

ShortURL=np.array(LongURL, dtype='str')

for i in range(0,len(LongURL)):
    #Shorten the URL and render it as a QR code image
    ShortURL[i]=shortener.short(LongURL[i])
    code=pyqrcode.create(ShortURL[i])
    code.png(str(ID[i]) + '.png', scale=6, module_color=[0,0,0,128],quiet_zone=7)

    #Add the ID as a caption on the saved image
    img=Image.open(str(ID[i]) + '.png')
    draw=ImageDraw.Draw(img)
    font = ImageFont.truetype("ariblk.ttf", 20)
    xcor=100
    draw.text((xcor,245),str(ID[i]),font=font)
    img.save(str(ID[i]) + '.png')

(Example output: captioned QR codes for book-1 through book-5.)

With pyshorteners, we have the option of using a bunch of different URL shorteners – in this case we used TinyURL.  See the pyshorteners GitHub page for a full list.
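
As noted above, pyshorteners rejects mailto: links with ValueError ('Please enter a valid url').  One way to handle that is to catch the error and fall back to the unshortened link; here's a minimal sketch of that idea (the helper name is mine, not part of the original script):

def shorten_or_passthrough(url):
    '''Shorten a URL, but return it unchanged if pyshorteners rejects it
    (e.g. a mailto: link).'''
    try:
        return shortener.short(url)
    except ValueError:
        return url

Inside the loop you would then call shorten_or_passthrough(LongURL[i]) in place of shortener.short(LongURL[i]).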

The font of your caption can be adjusted by taking the desired font's .ttf file (found in Control Panel > Appearance and Personalization > Fonts), copying it into the same folder as your script, and updating the filename in the ImageFont.truetype call.

You might need to adjust the "xcor" value (based on the length of your IDs) to get your caption centered under the QR image.  If your ID lengths are all different, consider adding a few lines of code to measure the caption width and update "xcor" dynamically; a sketch of one approach follows.
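
A minimal sketch of that idea, reusing the img, draw, and font objects from the script (font.getsize is the older Pillow call for measuring text; on newer Pillow releases you'd use draw.textlength instead):

caption=str(ID[i])
text_width,text_height=font.getsize(caption)   #pixel size of the rendered caption
xcor=(img.width-text_width)//2                 #center the caption horizontally
draw.text((xcor,245),caption,font=font)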

Finding Correlations

Script for normalizing and finding correlations across variables in a numeric dataset.  Data can be analyzed as a whole or split into 'n' subsets.  When split, normalizations and correlations are calculated for each subset separately.

Input is read from a .csv file with any number of columns.  Each column must have the same number of samples.  The script assumes the first row contains headers.

import numpy as np

#Divides a list (or np.array) into N equal parts.
#http://stackoverflow.com/questions/4119070/how-to-divide-a-list-into-n-equal-parts-python
def slice_list(input_list, size):
    input_size = len(input_list)
    slice_size = input_size // size
    remain = input_size % size
    result = []
    iterator = iter(input_list)
    for i in range(size):
        result.append([])
        for j in range(slice_size):
            result[i].append(next(iterator))
        if remain:
            result[i].append(next(iterator))
            remain -= 1
    return result

#Functions below are from Data Science From Scratch by Joel Grus
def mean(x):
    return sum(x)/len(x)

def de_mean(x):
    x_bar=mean(x)
    return [x_i-x_bar for x_i in x]

def dot(v,w):
    return sum(v_i*w_i for v_i, w_i in zip(v,w))

def sum_of_squares(v):
    return dot(v,v)

def variance(x):
    n=len(x)
    deviations=de_mean(x)
    return sum_of_squares(deviations)/(n-1)

def standard_deviation(x):
    return np.sqrt(variance(x))  

def covariance(x,y):
    n=len(x)
    return dot(de_mean(x),de_mean(y))/(n-1)

def correlation(x,y):
    stdev_x=standard_deviation(x)
    stdev_y=standard_deviation(y)
    if stdev_x >0 and stdev_y>0:
        return covariance(x,y)/stdev_x/stdev_y
    else:
        return 0
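
#Note: np.corrcoef(x,y)[0,1] computes the same Pearson correlation (when both
#inputs have nonzero variance) and can be used as a cross-check on correlation(x,y).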

#Read data from CSV
input_data=np.array(np.genfromtxt(r'C:\Users\Craig\Documents\GitHub\normalized\VariableTimeIntervalInput.csv',delimiter=",",skip_header=1))
var_headers=np.genfromtxt(r'C:\Users\Craig\Documents\GitHub\normalized\VariableTimeIntervalInput.csv',delimiter=",",dtype=str,max_rows=1)

#Determine number of samples & variables
number_of_samples=input_data.shape[0]
number_of_allvars=input_data.shape[1]

#Define number of samples (and start/end points) in full time interval
full_sample=number_of_samples
full_sample_start=0
full_sample_end=number_of_samples

#Define number of intervals to split data into
n=2
dvar_sublists={}
max_sublists=np.zeros((number_of_allvars,n))
min_sublists=np.zeros((number_of_allvars,n))
subnorm_test=np.zeros((full_sample_end, number_of_allvars+1))

#Slice variable lists
for dvar in range(0,number_of_allvars):
    dvar_sublists[dvar]=slice_list(input_data[:,dvar],n)
    for sublist in range(0,n):
        max_sublists[dvar,sublist]=np.max(dvar_sublists[dvar][sublist])
        min_sublists[dvar,sublist]=np.min(dvar_sublists[dvar][sublist])

var_interval_sublists=max_sublists-min_sublists

#Normalize each sublist.
for var in range(0, number_of_allvars):
    x_count=0
    for n_i in range(0,n):
        sublength=len(dvar_sublists[var][n_i])
        for x in range(0,sublength):
            subnorm_test[x_count,var]=(dvar_sublists[var][n_i][x]-min_sublists[var,n_i])/var_interval_sublists[var,n_i]
            subnorm_test[x_count,number_of_allvars]=n_i   #last column records which interval each row came from
            x_count+=1

var_sub_correlation=np.zeros((n,number_of_allvars,number_of_allvars),float)

#Check for correlation between each pair of variables within each interval
for n_i in range(0,n):
    for i in range(0,number_of_allvars):
        for j in range(0,number_of_allvars):
            #Rows of subnorm_test that belong to interval n_i
            start=sum(len(dvar_sublists[i][k]) for k in range(n_i))
            end=start+len(dvar_sublists[i][n_i])
            var_sub_correlation[n_i,i,j]=correlation(subnorm_test[start:end,i],subnorm_test[start:end,j])

#Write normalized data to CSV
np.savetxt(r'C:\Users\Craig\Documents\GitHub\normalized\sublists_normalized.csv',subnorm_test, delimiter=",") 

print(var_sub_correlation, 'variable correlation matrix')