Building a better desk controller

At work we have AMQ height-adjustable desks.  They use a controller that some corporate type probably thought was really cool.  It is the one on the left and it screws into the underside of the desk.  The problem with it is the capacitive touch buttons.  They are unreliable when you actually want them to work, and they often trigger when you just get near them with your chair, so most people in the office set their desks to one position and then unplug them.  AMQ gets away with this because the controller appears to work in a demo, they sell to businesses, and they are cheaper than the competition that probably makes a better controller.  The rest of the desk is fine, so I decided to make my own controller.

To start off I needed to figure out how the controller works.  To do this I used an Analog Discovery USB scope.  It is easily the best tool I’ve bought for my own electronics hobbies and is worth double what they charge for it.  The software is awesome and works on macOS, Linux, and Windows.  There is even an API so you can use it with your own code.  Using a breadboard and a couple of Ethernet jack breakouts I made a way to sniff the lines driven by the controller.  With this I drew this diagram:

If you are smarter than me you will see a problem with this diagram that I discovered later.  However, you can see that it is relatively simple.  Each of the buttons pulls one or two of the lines to ground when it is pressed.  I also figured out how the number display works: it is just a UART line.  Here are my notes on how the UART works:

When the numbers are displayed there are short bursts of four 8-bit values sent over the UART line.  When a height number is displayed it sends two 1’s and then the height value as an integer, which translates to a fractional value by dividing by 10.  When you press the memory button to set a memory position it sends 1, 6, a value that corresponds to the memory setting, and then 0.  Lastly there is an error state that displays A, S, and most of an F on the screen.  This happens when the encoder has lost its place and you have to run the desk all the way to the bottom for it to zero again.  Here is an image from the software for the logic analyzer showing a burst on the serial line:

One thing I really like about the Waveforms software is how easy the cursors are to use.  Expensive rack-mounted test equipment usually doesn’t have cursors this useful.  My next step was selecting the components I was going to use to build my controller.  Here is what I chose (some of these were used because they were parts I had on hand from previous projects):

To build this here are the tools I used:

First I drilled out and test fitted my face plate on my enclosure.  Here are a few pictures of that process:

I then cut out the space for the display by drilling the corners and then cutting the plastic with a razor blade.  If I were to do this again I’d use a smaller drill bit for the corners and cut using a Dremel; doing it this way took a long time.  For the buttons I soldered stranded wire onto the leads and crimped female 0.1″ headers onto the other side.  Here are some pictures of that process:

One of my buttons was too close to the PCB mounting point to screw on the nut.  I used an appropriately sized drill bit to cut it away:

With all that done the next step was to prototype the circuit and program the microcontroller.  To do this I used a breadboard and prototyping wires.  Here is where I learned I had missed a few things before.  For one, the power connection.  Before, I was a bit confused about pins 3 and 4, which both seemed to be connected to power.  By using the scope I discovered that pin 4 was only high when the other controller was plugged in, and it was at a slightly lower voltage than pin 3, which was always high.  This led me to believe that the controller was pulling it high to let the motor controller know it was plugged in.  I tied it to the high line through a resistor and all seemed to work on the power front.

Next I discovered that if I wired up the buttons as I had drawn in my first diagram, current would flow back through the lines I’d connected together, and the buttons that pulled only one line low (up and down) no longer worked after I connected those lines together on one side of some of the memory set buttons.  To fix this I used electronic one-way valves, a.k.a. diodes, with a circuit like this:

This worked great and I was on to programming the microcontroller.  This proved to be a bit challenging because the Trinket I had chosen was so limited in pins and code space.  Because it does not have a hardware UART I had to use the SoftwareSerial library.  For controlling the LED display I used the I2C connection and Adafruit’s library.  However, because of the limited code space, to get my binary to fit I had to comment out the parts of Adafruit’s library that were for the matrix displays.  Here is the code I wrote for the Trinket:

Here are the final circuits and a picture of my prototyping setup:

Once I proved that it all worked I soldered it all together on a prototyping board and assembled the box.  To mount the PCB I used hot glue to secure some standoffs.  I also used hot glue to secure the display in its spot.

Lastly I cut out the spot for the Ethernet jack with a Dremel and screwed the whole thing together.  I now use it on my desk at work and it is awesome.

How to connect a data API to your React app

This is something that seems like it should be obvious, and it probably is to those who create websites regularly; however, I struggled with it for a bit.  This tutorial from Fullstack React is awesome and helped me figure this out.  The TL;DR version is below.

Use two Node.js servers: one to serve the client application (which can then be bundled by webpack) and one to serve the API in development.  To run them at the same time, use the tool concurrently.  Then, when you are ready for deployment, you can build the client app into a static asset with webpack and serve it from the API server with this code:

Then you just copy the code to run the API server and the client/build folder to your server and run it there the way you normally would.  As for how you serve data to your React app, here is an example.

Server:

Client:

React Component:

For a more detailed example refer to the article listed above.  This is something I learned recently while improving rallyscores.com.  You can find the source of that site on GitHub.


Caching your requests to Nominatim with geopy to avoid timeouts using DynamoDB

TL;DR: get the source code for this from my GitHub gist and use it in your projects:

GitHub Gist

The Problem

Nominatim is a free way to turn address strings into latitude and longitude points for mapping.  This is really useful if you want to draw addresses on a map or something.  If you are like me, you will just look up the example code and run with it until you realize something: there are terms of use, and if you query the same address multiple times you will get temporarily blocked.  The terms of use state that you should cache the results.

The Plan

Here I’m going to show you how to do that with DynamoDB.  DynamoDB is a NoSQL database hosted on AWS.  It is simple to set up and use and is free for low-volume use.  It is a great alternative to MongoDB and I plan on learning it and using it in future projects.  Before you get started you need to set up your local environment for using the AWS API.  Here is a link for how to do it: AWS API Quickstart.  Once you have that working I can talk you through how to make this work.

Create the Table

The first thing we are going to do is create a table.  For my hashed primary key I am specifying a string I’m calling ‘query’ that I will explain later.  This string is how I will look up results in the future, as it is a representation of the string that I will send to Nominatim.  You only have to specify the attributes that are also keys here, because the database is schema-less.
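In case it helps, here is a minimal boto3 sketch of that table creation.  The table name geocode_cache and the throughput numbers are placeholders, not necessarily what my gist uses:

```python
import boto3

dynamodb = boto3.resource('dynamodb')

# 'query' is the hash key; nothing else needs to be declared up front
# because DynamoDB is schema-less for non-key attributes.
table = dynamodb.create_table(
    TableName='geocode_cache',  # placeholder name
    KeySchema=[{'AttributeName': 'query', 'KeyType': 'HASH'}],
    AttributeDefinitions=[{'AttributeName': 'query', 'AttributeType': 'S'}],
    ProvisionedThroughput={'ReadCapacityUnits': 1, 'WriteCapacityUnits': 1},
)
table.wait_until_exists()
```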

Pushing the first record to the database

To push a record to the database you use the function put_item.  It has one important parameter, Item, which you set to the dictionary that you want to push to the database.  It will create a new item or overwrite any existing item whose key matches.

One thing I want to point out is that the only floating point format the database accepts is decimal.Decimal.  That is not the normal floating point type in Python (float) and is not what is returned by our geolocator for latitude and longitude.  There is a second complication: if you try to initialize them directly as decimal.Decimal, you will sometimes get an exception about storing an inexact decimal.  This is nastiness that results from Python’s handling of floating point numbers and is part of why decimal.Decimal exists.  To get around this I cast my float to a string and then initialize the decimal.Decimal with that string.  That will never cause it to be inexact.  The downside is that you lose a bit of precision doing this, but that is not a concern here, as it is enough precision for what is returned by geocode.

You will notice that I’m doing some weird stuff with the query-to-address string.  This is something you might modify for your application, but I did this because I am using the query as a filename in one of my projects and I hate spaces or commas in filenames.  Other than that this is rather simple.
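Here is a sketch of what that put looks like, assuming the placeholder geocode_cache table from above and a hypothetical address_to_query helper that strips out the spaces and commas:

```python
from decimal import Decimal

import boto3

table = boto3.resource('dynamodb').Table('geocode_cache')  # placeholder table name


def address_to_query(address):
    # hypothetical normalization: drop commas and spaces so the query string
    # can double as a filename
    return address.replace(',', '').replace(' ', '_')


def put_record(address, location):
    """Store one geopy location result under its normalized query string."""
    table.put_item(Item={
        'query': address_to_query(address),
        'address': location.address,
        # cast through str so Decimal never raises an Inexact exception
        'latitude': Decimal(str(location.latitude)),
        'longitude': Decimal(str(location.longitude)),
    })
```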

Getting a record from the database

Here I show how I retrieve a record from the database.  The item that comes back has its numbers stored as decimal.Decimal objects, so I round-trip it through json.dumps and json.loads to turn it back into a plain dictionary.  There is one other thing, DecimalEncoder.  This is used to convert the decimal.Decimal objects back into either floats or ints, depending on whether they have a fractional part.
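A sketch of that retrieval, again using the placeholder table name:

```python
import json
from decimal import Decimal

import boto3

table = boto3.resource('dynamodb').Table('geocode_cache')  # placeholder table name


class DecimalEncoder(json.JSONEncoder):
    """Turn decimal.Decimal values back into ints or floats."""
    def default(self, o):
        if isinstance(o, Decimal):
            return int(o) if o % 1 == 0 else float(o)
        return super().default(o)


def get_record(query):
    """Fetch a cached result, or return None if the query is not in the table."""
    item = table.get_item(Key={'query': query}).get('Item')
    if item is None:
        return None
    # round-trip through json to convert the Decimal values
    return json.loads(json.dumps(item, cls=DecimalEncoder))
```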

Putting it all together

Here we put it all together with two functions, get_address and get_query, which are the real external interface of this short library.  These functions check whether the query string is already in the database and, if it is, fetch the result from there.  If not, they make the request out to Nominatim, store the result in the database, and give you back your result.
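Roughly, the pair looks like the sketch below.  It reuses address_to_query, put_record, and get_record from the earlier sketches; the user_agent string is a placeholder, and exactly what each function returns is my guess, so check the gist for the real interface:

```python
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent='geocode-cache-example')  # placeholder user agent


def get_query(address):
    """Return the cached record for an address, geocoding and storing it if needed."""
    query = address_to_query(address)
    record = get_record(query)
    if record is None:
        location = geolocator.geocode(address)
        put_record(address, location)
        record = get_record(query)
    return record


def get_address(address):
    """Convenience wrapper that hands back just the coordinates."""
    record = get_query(address)
    return record['latitude'], record['longitude']
```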


And that is it, folks.  Check out the gist, copy and paste it into your project, and solve this annoying problem for good.


Rent Scout Report for Colorado

Rent Scout, the application for finding ideal rental properties, is finally generating reports.  Here is the report generated yesterday.  You can see that it is estimating a really high rate of return on properties here in Colorado.  I don’t believe that is completely accurate, as there are many costs I have not accounted for.  Anyway, here is a link where you can download the report:

Rent Scout Report 04/28/2017

Python Flashcards Game

This is part of my poker math game, as there are topics from the book that work well in the flashcards format. It is a really simple program that demonstrates how to do several interesting things in Python.

The first couple functions show a simple way to wait for the user to input some data. The reason I’m assigning the output to __ is to show that I’m not going to use that data. The double underscore variable is a standard way of expressing that in python.

At the beginning of main (line 16) I call clear to wipe the terminal window and give me a fresh-looking space. This shows the simple way to call system commands from Python.

On line 20 we use the with ... as syntax to open the file. This is a special kind of statement for resources like files that need to be cleaned up when you finish using them. The advantage of with is that when execution leaves the block the file is cleaned up and closed. That means that even if the code inside throws an exception, the file will be closed.

Line 21 uses the pythonic line reading syntax to read from the file. The variable cards refers to the file and for line in file will loop through a file a line at a time in python.

Line 24 is the next interesting piece of syntax. To understand it we need to break it down. The first part is split, which separates the line into a list of strings, splitting every time it sees a comma. The second part is the list comprehension. A list comprehension is syntax that efficiently does something to each item in a list, and it looks like [action for item in list]. strip() is a function that removes whitespace at the beginning and end of a string, and I’m using it specifically to clean up any extra whitespace in the csv file lines.
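The full program is in the repo; here is a compressed sketch of the same techniques. The line numbers referenced above are for the original file, not this sketch, and the cards.csv filename is just an example:

```python
import os


def wait_for_user():
    # the return value is thrown away, hence the __ name
    __ = input('Press Enter to flip the card...')


def main():
    os.system('clear')  # shell out to clear the terminal window
    with open('cards.csv') as cards:  # example file of question,answer lines
        for line in cards:  # read the file one line at a time
            # split on commas, then strip stray whitespace with a list comprehension
            question, answer = [field.strip() for field in line.split(',')]
            print(question)
            wait_for_user()
            print(answer)
            wait_for_user()


if __name__ == '__main__':
    main()
```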

So there you have it, a simple flashcards Python app you can use for studying. To follow the advancements in my Python game, go check out my GitHub repo and let me know what you think.

Poker Math Game – Pot Odds

I just started reading the book Essential Poker Math, Expanded Edition and decided to write some short Python games to help train myself. The first game is called Pot Odds and is based on chapter 6 of the book. It deals only in whole numbers and assumes you have only one other opponent. This is a simplification that lets you focus on becoming very quick at doing the math in your head. Since this is based on Python, I’m requiring pyenv for managing Python installs and dependencies.

Pyenv install:

Then place this in your .bashrc:

And lastly you will need to run this:

Game setup

Download the game using git or click on this link:

Then setup the environment and install the dependencies:

Lastly, you can start this first game with this command:

Searching Realtor.com with Python

In this tutorial I will show you how I used python to get a list of addresses and prices from the website realtor.com in a given area.

Disclaimer

Before we start I need to point out that realtor.com does not want web scraping against their site. They have gone so far as to use a system built by Distil Networks to identify and block bots. Their main concern is that you will scrape all the data off their site and build a competing site, getting ad revenue off their data or something. Here is a link describing what they are doing. While technically I am scraping (reading web pages with something other than a browser), I am not going to republish any information I gather. Also, I am only doing this to more efficiently use the information that they publish. I believe that while the advice below may violate their view of the letter of the law, it does not violate the spirit of the law.

Source

The source for this can be found in my GitHub gist. This was done for the rent-scout project. Over time rent-scout will be updated and some of what I post below will become out of date.

The first URL

Before you can begin parsing any website you need to first find a URL you can work with. The awesome thing about URLs is that they are like folders that take arguments, and you can string together a list of different arguments to find the address you want to access. I discovered that if I went to http://www.realtor.com/local/Colorado I got a table of links that showed the top cities in that state along with the number of listings available for rent and for purchase. I am mostly interested in the ones available for purchase because I want them as rental properties. The first thing you can do to read the URL is to use the libraries lxml and requests. You can install them with pip install lxml requests. This will read that page and print the HTML data in text form:
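A minimal version of that fetch looks something like this; the User-Agent string is just an example, and the troubleshooting note below explains why it matters:

```python
import requests

# use a session with a browser-like User-Agent; without one the site
# answers with endless redirects (see the note below)
session = requests.Session()
session.headers.update({'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64)'})

url = 'http://www.realtor.com/local/Colorado'
response = session.get(url)
print(response.text)
```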

Troubleshooting Notes

I discovered really quickly that if I didn’t create a session with a user agent header, the site would put me into infinite redirects. This is probably one of their tactics to dissuade beginners from writing scripts to scrape their site. Sessions also have other benefits, one of which is speed: a session reuses the TCP connection instead of setting up a new one each time it sends a request. If you don’t set the User-Agent to something browser-like, it will look something like this: python-requests/1.2.0. That is a dead giveaway to the server that you are not a browser, but instead Python’s requests library trying to get the server to respond with an HTML page that you can parse.

Parse that table

The steps for parsing the table of cities are: first create a tree from the HTML data, then find the XPaths to the various items we want, and then parse that data into a useful structure.

Build the tree

There is a tool in the html package from lxml that can build a tree from a string:
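Something like this, reusing the response from the previous snippet:

```python
from lxml import html

# build an element tree from the page text fetched above
tree = html.fromstring(response.text)
```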

Find the XPaths

XPath is a way to navigate through elements and attributes in an XML document. The easiest way to find an XPath to an item you are looking for is to use Google Chrome. To find the XPath, first open up the Developer Tools (View->Developer->Developer Tools). Next, right-click on an element you are interested in and select Inspect. That will highlight the line of HTML code that that link comes from. Right-click on that line of code and then select Copy->Copy XPath. If you do that on the link for the city Aurora on the page http://www.realtor.com/local/Colorado you will get this result:

Parse the data

You will notice that the table is in a tag with the id top-cities, and that table has tr and td tags: tr is a row, td is a column, and a is a link. The second column, you will notice, is a list of links to homes for sale in that area. I found this one particularly interesting, so I used this code to investigate:
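The original snippet was embedded as a gist; a rough sketch of that investigation step, assuming the top-cities table structure described above, looks like this:

```python
# grab every link in the second column of the top-cities table and print
# its href and title attributes to see what the site exposes
for link in tree.xpath('//*[@id="top-cities"]//tr/td[2]/a'):
    print(link.get('href'), link.get('title'))
```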

The output of this looks like this:

Now this is something I can work with. The next step was to get the names of the cities out of the title element, and to do that I used this code:

Next I wanted to grab the links, the number of properties listed for sale, and the number of properties listed for rent for each location. This is the code I used to do that:
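A hedged sketch of that step, built with list comprehensions on the tree from earlier. The exact column positions, the idea that the link text holds the listing count, and the relative hrefs are all assumptions about the page layout:

```python
# rows of the top-cities table that actually contain links (skips the header)
rows = tree.xpath('//*[@id="top-cities"]//tr[td/a]')

# parallel lists built with list comprehensions (see the syntax note below)
cities      = [row.xpath('./td[2]/a/@title')[0] for row in rows]
sale_links  = [row.xpath('./td[2]/a/@href')[0] for row in rows]
sale_counts = [row.xpath('./td[2]/a/text()')[0].strip() for row in rows]
rent_counts = [row.xpath('./td[3]/a/text()')[0].strip() for row in rows]

# tack the for-sale count onto each url as the pgsz argument so every
# listing in the area shows up on a single page (see the url note below)
search_urls = ['http://www.realtor.com' + link + '?pgsz=' + count
               for link, count in zip(sale_links, sale_counts)]
```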

Syntax Note

You will notice that I’m using a technique to build lists that you may not have used before. It is called a list comprehension and here is a link that explains how to use it: List Comprehensions

URL argument Note

The second thing you should notice about the code above is the argument that I’m adding to the end of the URL. With URLs you can add arguments by listing them after a question mark in key=value pairs, like this: ?key=value&key2=value2. I used this technique to set the variable pgsz to the number of houses available for sale in that area. I discovered that this causes the website to display all the listings in that area on one page. To figure this out I hovered my mouse over the next-page button at the bottom of the results page and saw this argument, changed it, and noted what it did. This makes the next step much easier.

Getting a link to the listing

The next step proved to be much harder. I wanted to find the links on the next page that went to listings, but one thing I found is that there were duplicate links, and grabbing the link was actually kind of difficult using XPath. Enter another tool: BeautifulSoup. It is a tool for searching through documents and finding what you are looking for, and it is considerably easier than XPath. To test, I chose just one city to continue with, Aurora. There are two steps to this part: getting the list of links to the listings and parsing the data on those pages.

Getting a list of links to listings

Below is how I got the list of URLs that went to addresses:
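Here is a sketch of that step with BeautifulSoup, reusing the session and search_urls from the earlier sketches; which entry in search_urls corresponds to Aurora is an assumption:

```python
import re

from bs4 import BeautifulSoup

# fetch the all-listings page for one city and parse it
soup = BeautifulSoup(session.get(search_urls[0]).text, 'lxml')

# every listing link contains the word 'detail'; keep only the part of the
# href before any '?' and add it to a set so duplicates collapse away
listing_links = set()
for tag in soup.find_all(href=re.compile('detail')):
    listing_links.add(re.match('[^?]*', tag['href']).group(0))
```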

Making URLs unique

You will notice a couple of things. The first is the regular expression '[^?]*'. What this does is match until it finds a ? in the string. This is because the website has many copies of the same link with different arguments (probably so they can track which links get clicked on). I wanted a list of unique links, so I used a set. When you add an item to a set it checks whether that item is already in the set, and only adds it if it is not. The second thing you’ll notice is that I’m searching through the soup object with a regular expression matched against the href parameter of find_all. This is complicated, so I’ll break it down into parts. find_all takes keyword arguments that it reads as a dictionary and uses to search the tags in the soup. In our case we want all tags with an href attribute (a link) where that link contains the word detail. I discovered the format for the links by clicking on links in my browser and looking at the URLs; all the listing links had the word detail in them.

On regular expressions

Regular expressions are very powerful for searching, yet they can also be very confusing. For a good primer on how to use them, go to the Python regular expression operations documentation page and read up. The trick to learning regular expressions is to use them: set up a program and try parsing text. The only real way to learn something like regular expressions is to play around. Then, once you understand the basics, you can google for how to do something specific and you will often find many examples of exactly what you are trying to do.

Parsing data out of those links

Once you have the links the next step is rather trivial. There is much more that could be done here, but I chose to just go to the links and get the address and asking price for now. Here is the code; there isn’t anything new here:
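Since the original snippet was embedded, here is a rough stand-in. The itemprop selectors are guesses about the listing pages and will likely need adjusting, as will whether the hrefs need the domain prepended:

```python
for link in listing_links:
    page = session.get('http://www.realtor.com' + link).text  # assumes relative hrefs
    listing = BeautifulSoup(page, 'lxml')
    address = listing.find(itemprop='streetAddress')  # hypothetical selector
    price = listing.find(itemprop='price')            # hypothetical selector
    if address and price:
        print(address.get_text(strip=True), price.get_text(strip=True))
```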

Next Steps

The next thing I’m going to do with this is gather more information about each of the listings and start building a local database that I can then use with other scripts, without going out to the website each time to gather the same information. Doing this will make it so I’m not being evil with the bandwidth on realtor.com’s servers and making it harder for them to get the same information to other users. If you want to learn how to do this, you should copy the code into a text file, run it, and then start playing around with it. Change it to work differently on your system and see what it does. And of course, I am not responsible for anything you do with this new power.

Rent Scout – Investing with Python

I started a new project on GitHub, rent_scout, where I will be using Python as a tool to explore real estate investing. My current plan goes as follows: use Python to scrape the web for properties for sale and for rent in a given area, along with as much data as possible about the real costs of purchasing a property as a rental. The plan revolves around the idea of using readily available information to find markets and properties that would turn the largest guaranteed return on investment.

Real estate investing is a good hedge against inflation and can largely be done with the bank’s money. Ideally every property would be cash-flow positive from day one, but that might not be possible. The plan is to build a tool that models real estate investments and can offer some metric to assist in investing in the best the market has to offer at the best times. I know there are always risks in investing and that no model can be completely trusted, but I believe that by understanding as much as I can about the process I can gain a real advantage over other small-time real estate investors.

The plan would then be to find properties that could be purchased as long-term investments, handed over to a good property management company, and ideally sold in 15-30 years. The reason for this is that there are many costs involved in each real estate transaction, and unless I got into a really bad deal it would make more sense to ride out anything the market does, so long as I can generate enough from rent to pay off the property. Then, once the property is paid off, I would have real passive income.

The first things to model are house prices and rent prices.  The next step is building a tool that can scrape the web for this information.  To start I’ve written a simple Python script to calculate the monthly payment of a loan and the total cost of that loan; a sketch of that calculation appears after the list below.  Once I have rent and housing prices I need to do some digging to find out all the real costs of owning a home and renting it out (including what good property management companies are available in the market I’m researching).  I need to find out the tax codes in each state and area I’m interested in and build that into the model.  I need to model the tax benefits of paying interest and so forth.  Here are my next steps:

  1. Costs of owning a home (ask friends, read books, google, etc.)
  2. Hidden costs of mortgages and how to get the best mortgage
  3. Web scraper for houses listed in an area and their attributes
  4. Web scraper for rental prices and availability (supply) in a market
  5. Research what makes rentals more desirable
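
As mentioned above, here is a sketch of the loan math that script does; the rate and term in the example are made up:

```python
def monthly_payment(principal, annual_rate, years):
    """Standard amortized payment: P * r / (1 - (1 + r) ** -n)."""
    r = annual_rate / 12  # monthly interest rate
    n = years * 12        # number of monthly payments
    if r == 0:
        return principal / n
    return principal * r / (1 - (1 + r) ** -n)


def total_cost(principal, annual_rate, years):
    """Total amount paid over the life of the loan."""
    return monthly_payment(principal, annual_rate, years) * years * 12


# example: a $250,000 loan at 4% for 30 years
print(round(monthly_payment(250000, 0.04, 30), 2))  # roughly 1193.54 per month
print(round(total_cost(250000, 0.04, 30), 2))       # roughly 429673 in total
```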