pisense¶
This package is an alternative interface to the Raspberry Pi Sense HAT. The major difference to the official API is that the various components of the Sense HAT (the screen, the joystick, the environment sensors, etc.) are each represented by separate classes which can be used individually or by the main class which composes them together.
The screen has a few more tricks including support for any fonts that PIL supports, representation as a numpy array (which makes scrolling by assigning slices of a larger image very simple), and a bunch of rudimentary animation functions. The joystick and all the sensors have an iterable interface too.
Links¶
- The code is licensed under the BSD license
- The source code can be obtained from GitHub, which also hosts the bug tracker
- The documentation (which includes installation, quick-start examples, and lots of code recipes) can be read on ReadTheDocs
- Packages can be downloaded from PyPI, but reading the installation instructions is more likely to be useful
Installation¶
Raspbian installation¶
On Raspbian, it is best to obtain pisense via the apt utility:
$ sudo apt update
$ sudo apt install python-pisense python3-pisense
The usual apt upgrade method can be used to keep your installation up to date:
$ sudo apt update
$ sudo apt upgrade
To remove your installation:
$ sudo apt remove python-pisense python3-pisense
Other platforms¶
On other platforms, it is probably easiest to obtain pisense via the pip utility:
$ sudo pip install pisense
$ sudo pip3 install pisense
To upgrade your installation:
$ sudo pip install -U pisense
$ sudo pip3 install -U pisense
To remove your installation:
$ sudo pip uninstall pisense
$ sudo pip3 uninstall pisense
Getting started¶
Warning
Make sure your Pi is off while installing the Sense HAT.
Hardware¶
Remove the Sense HAT from its packaging. You should have the following parts:
Attention
TODO package pictures
- The Sense HAT itself
- A 40-pin stand-off header. This usually comes attached to the Sense HAT and many people don’t realize it’s removable (until they try and unplug their Sense HAT and it comes off!)
- Eight screws and four stand-off posts.
To install the Sense HAT:
Attention
TODO installation pictures
Screw the stand-off posts onto the Pi from the bottom.
Warning
On the Pi 3B, some people have noticed reduced performance from using a stand-off post next to the wireless antenna (the top-left position if looking at the top of the Pi with the HDMI port at the bottom). You may wish to leave this position empty or simply skip using the stand-offs entirely (they are optional but make the joystick a little easier to use).
Push the Sense HAT onto the Pi’s GPIO pins, ensuring all the pins are aligned. The Sense HAT should cover most of the Pi (other than the USB / Ethernet ports).
If using the stand-offs, secure them to the Sense HAT from the top with the remaining screws. If you find you cannot align the holes on the Sense HAT with the stand-offs this is a sure-fire sign that the pins are misaligned (you’ve missed a row / column of GPIO pins when installing the HAT). In this case, remove the Sense HAT from the GPIO pins and try again.
Finally, apply power to the Pi. If everything is installed correctly (and you have a sufficiently up to date version of Raspbian on your SD card) you should see a rainbow appear on the Sense HAT’s LEDs as soon as power is applied. The rainbow should disappear at some point during boot-up. If the rainbow does not disappear this either means the HAT is not installed correctly or your copy of Raspbian is not sufficiently up to date.
First Steps¶
Start a Python environment (this documentation assumes you use Python 3, though the pisense library is compatible with both Python 2 and 3), and import the pisense library, then construct an object to interface to the HAT:
$ python3
Python 3.5.3 (default, Jan 19 2017, 14:11:04)
[GCC 6.3.0 20170124] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pisense
>>> hat = pisense.SenseHAT()
The hat object represents the Sense HAT, and provides several attributes which represent the different components on the HAT. Specifically:
- hat.screen represents the 8 x 8 grid of LEDs on the HAT.
- hat.stick represents the miniature joystick at the bottom right of the HAT.
- hat.environ represents the environmental (pressure, humidity, and temperature) sensors on the HAT.
- hat.imu represents the sensors of the Inertial Measurement Unit (IMU) on the HAT.
The Screen¶
Let’s try controlling the screen first of all. The screen’s state is represented as a two-dimensional ndarray of (red, green, blue) values. The structure of the values is compatible with the Color class from the colorzero library, which makes them quite easy to work with:
>>> from colorzero import Color
>>> hat.screen.array[0, 0] = Color('red')
You should see the top-left LED on the HAT light up red. It’s worth noting at this point that the two dimensions of the numpy array are rows, then columns so the first coordinate is the Y coordinate, and that 0 on the Y-axis is at the top. If this seems confusing (because graphs are typically drawn with the origin at the bottom left) consider that (in English at least) you start reading from the top left of a page which is why the origin of computer displays is there.
As for why the “X” coordinate comes second, this is due to the way image data is laid out in memory. “Bigger” dimensions (by which we mean slower moving dimensions) come first, followed by “smaller” dimensions. When dealing with a graphical display (or reading text in English), we move along the display first before moving down a line. Hence the “X” coordinate is “smaller”; it moves “faster” than the Y coordinate, changing with every step along the display whereas the Y coordinate only changes when we reach the end of a line.
Hence, just as we put “bigger” values first when writing out numbers (thousands, then hundreds, then tens, then units), or the time (hours, minutes, seconds), we write the “bigger” coordinate (the Y coordinate) first when addressing pixels in the display:
>>> hat.screen.array[0, 1] = Color('green')
>>> hat.screen.array[1, 0] = Color('blue')
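If the row-first ordering feels abstract, here’s a hardware-free sketch of the same addressing, with a plain three-channel numpy array standing in for the screen (the real screen array is a structured ScreenArray, but the indexing behaves the same way):

```python
import numpy as np

# An 8x8 grid of (r, g, b) values, mimicking the screen's layout
screen = np.zeros((8, 8, 3))

# Rows (the Y coordinate) come first, columns (X) second
screen[0, 1] = (0, 1, 0)   # green: top row, second column
screen[1, 0] = (0, 0, 1)   # blue: second row, first column

print(screen.shape)        # (8, 8, 3)
```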
Numpy’s arrays allow us to address more than one value at once, by “slicing” the array. We won’t cover all the details of Python slicing (see the linked manual page for full details), but here’s some examples of what we can do with slicing (and what bits are optional). We can turn four pixels along the top red in a single command:
>>> hat.screen.array[0, 0:4] = Color('red')
If the start of a slice is zero it can be omitted (if the end of a slice is unspecified it is the length of whatever you’re slicing). Hence we can change the entire upper left quadrant red with a single command:
>>> hat.screen.array[:4, :4] = Color('red')
We can omit both the start and end of a slice (by specifying “:”) to indicate we want the entire length of whatever we’re slicing. For example, to draw a couple of white lines next to our quadrant:
>>> hat.screen.array[:, 4] = Color('white')
>>> hat.screen.array[4, :] = Color('white')
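Slices can also take a step value, which makes simple patterns easy. This hardware-free sketch uses a plain numpy array of on/off values in place of the screen; with the HAT you would assign a Color to the same slices:

```python
import numpy as np

board = np.zeros((8, 8))   # 0 = off, 1 = on
board[::2, ::2] = 1        # even rows, even columns
board[1::2, 1::2] = 1      # odd rows, odd columns
# On the real screen: hat.screen.array[::2, ::2] = Color('white'), etc.
print(int(board.sum()))    # 32 lit elements: a checkerboard
```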
We can also read the display as well as write to it. We can read individual elements or slices, just as with writing:
>>> hat.screen.array[0, 0]
(1., 0., 0.)
>>> hat.screen.array[4, :]
ScreenArray([(1., 1., 1.), (1., 1., 1.), (1., 1., 1.), (1., 1., 1.),
(1., 1., 1.), (1., 1., 1.), (1., 1., 1.), (1., 1., 1.)],
dtype=[('r', '<f4'), ('g', '<f4'), ('b', '<f4')])
This means we can scroll our display by assigning a slice to another (similarly shaped) slice. First we’ll take a copy of our display so we can get it back later, then we’ll use a loop with a delay to slide our display left:
>>> original = hat.screen.array.copy()
>>> from time import sleep
>>> for i in range(8):
...     hat.screen.array[:, :7] = hat.screen.array[:, 1:]
...     sleep(0.1)
...
Neat as that was, the screen object actually has several methods to make animations like this easy. Let’s slide our original back onto the display:
>>> hat.screen.slide_to(original, direction='right')
We can construct images for the display with the array() function. Let’s construct a blue screen (thankfully not of death!) and fade to it:
>>> blue_screen = pisense.array(Color('blue'))
>>> hat.screen.fade_to(blue_screen)
The array() function can also be given a list of values to initialize itself. This is particularly useful with Color aliases that are a single letter long. For example, to draw a French flag on our display:
>>> B = Color('black')
>>> r = Color('red')
>>> w = Color('white')
>>> b = Color('blue')
>>> black_line = [B, B, B, B, B, B, B, B]
>>> flag_line = [B, b, b, w, w, r, r, B]
>>> flag = pisense.array(black_line * 2 + flag_line * 4 + black_line * 2)
>>> hat.screen.fade_to(flag)
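The same list-building trick works for horizontal stripes by repeating whole rows of a single color. Here’s a sketch of a rough Dutch tricolour; plain strings stand in for the Color objects so it runs without the HAT (with pisense you would use Color('red') and friends and pass the list to array()):

```python
# One-letter variables standing in for colorzero Color objects
K, r, w, b = 'black', 'red', 'white', 'blue'
blank_row = [K] * 8
flag = (blank_row      # one blank row at the top
        + [r] * 16     # two red rows
        + [w] * 16     # two white rows
        + [b] * 16     # two blue rows
        + blank_row)   # one blank row at the bottom
print(len(flag))       # 64: exactly fills the 8x8 display
```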
Finally, if you’re familiar with the Pillow library (formerly PIL, the Python Imaging Library) you can obtain a representation of the screen with the image() method. You can draw on this with the facilities of Pillow’s ImageDraw module, then copy the result back to the Sense HAT’s screen with the draw() method (the image returned doesn’t automatically update the screen when modified, unlike the array representation):
>>> flag_img = hat.screen.image()
>>> from PIL import Image, ImageFilter
>>> blur_img = flag_img.filter(ImageFilter.GaussianBlur(1))
>>> hat.screen.draw(blur_img)
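To give a flavour of what ImageDraw can do, here’s a hardware-free sketch that draws on a small 8 x 8 image; with the HAT you would start from hat.screen.image() and finish with hat.screen.draw():

```python
from PIL import Image, ImageDraw

# An 8x8 RGB image standing in for the screen (starts all black)
img = Image.new('RGB', (8, 8))
draw = ImageDraw.Draw(img)
draw.rectangle((1, 1, 6, 6), outline=(255, 0, 0))  # a red box outline
draw.point((3, 4), fill=(255, 255, 255))           # one white pixel inside
# hat.screen.draw(img)  # copy the result to the HAT's screen
print(img.getpixel((1, 1)))   # (255, 0, 0)
```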
The Joystick¶
The miniature joystick at the bottom right of the Sense HAT is exceedingly useful as a basic interface for Raspberry Pis without a keyboard. The joystick actually emulates a keyboard (which in some circumstances is useful and in others, very annoying) but it’s simpler to use the library’s facilities to read the joystick rather than trying to treat it as a keyboard. The read() method can be used to wait for an event from the joystick. Type the following then briefly tap the joystick to the right:
>>> hat.stick.read()
StickEvent(timestamp=datetime.datetime(2018, 5, 4, 22, 52, 35, 961776),
direction='right', pressed=True, held=False)
As you’ve released the joystick there should be a “not pressed” event waiting to be retrieved. Notice that its timestamp is shortly after the former event (because the timestamp is the time at which the event occurred, not when it was retrieved):
>>> hat.stick.read()
StickEvent(timestamp=datetime.datetime(2018, 5, 4, 22, 52, 36, 47511),
direction='right', pressed=False, held=False)
The read() method can also take a timeout value (measured in seconds). If an event has not occurred before the timeout elapses, it will return None:
>>> print(repr(hat.stick.read(1.0)))
None
The event is returned as a namedtuple() with the following fields:
- timestamp – the timestamp at which the event occurred.
- direction – the direction in which the joystick was pushed. If the joystick is pushed inwards this will be “enter” (as that’s the key that it emulates).
- pressed – this will be True if the event occurred due to the joystick being pressed or held in a particular direction. If this is False, the joystick has been released from the specified direction.
- held – when True the meaning of this field depends on the pressed field:
  - When pressed is also True this indicates that the event is a repeat event occurring because the joystick is being held in the specified direction.
  - When pressed is False this indicates that the joystick has been released but it was held down (this is useful for distinguishing between a press and a hold during the release event).
Hence a typical sequence of events when briefly pressing the joystick right would be:
direction | pressed | held
--- | --- | ---
right | True | False
right | False | False
However, when holding the joystick right, the sequence would be:
direction | pressed | held
--- | --- | ---
right | True | False
right | True | True
right | True | True
right | True | True
right | True | True
right | False | True
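The pressed and held flags above can be collapsed into a small classifier function. This sketch uses a namedtuple standing in for StickEvent (with the timestamp omitted) so it runs without the HAT:

```python
from collections import namedtuple

# A stand-in for pisense's StickEvent (timestamp omitted for brevity)
Event = namedtuple('Event', ('direction', 'pressed', 'held'))

def classify(event):
    """Name the stage of a press according to the pressed/held flags."""
    if event.pressed:
        return 'repeat' if event.held else 'press'
    return 'hold-release' if event.held else 'click'

# A brief tap to the right: a press, then a plain release
tap = [Event('right', True, False), Event('right', False, False)]
print([classify(e) for e in tap])   # ['press', 'click']
```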
Finally, the joystick can be treated as an iterator which yields events whenever they occur. This is particularly useful for driving interfaces as we’ll see in later sections. For now, you can try this on the command line:
>>> for event in hat.stick:
...     print(repr(event))
...
StickEvent(timestamp=datetime.datetime(2018, 5, 4, 20, 6, 10, 845258), direction='right', pressed=True, held=False)
StickEvent(timestamp=datetime.datetime(2018, 5, 4, 20, 6, 11, 100073), direction='right', pressed=True, held=True)
StickEvent(timestamp=datetime.datetime(2018, 5, 4, 20, 6, 11, 150078), direction='right', pressed=True, held=True)
StickEvent(timestamp=datetime.datetime(2018, 5, 4, 20, 6, 11, 200125), direction='right', pressed=True, held=True)
StickEvent(timestamp=datetime.datetime(2018, 5, 4, 20, 6, 11, 250146), direction='right', pressed=True, held=True)
StickEvent(timestamp=datetime.datetime(2018, 5, 4, 20, 6, 11, 300088), direction='right', pressed=True, held=True)
StickEvent(timestamp=datetime.datetime(2018, 5, 4, 20, 6, 11, 316964), direction='right', pressed=False, held=True)
^C
Note
You’ll probably see several strange sequences appear on the terminal when playing with this (like ^[[A, ^[[B, etc). These are the raw control codes for the cursor keys and can be ignored. Press Ctrl-c when you want to terminate the loop.
Environmental Sensors¶
The environmental sensors on the Sense HAT consist of two components: a pressure sensor and a humidity sensor. Both of these components are also capable of measuring temperature. For the sake of simplicity, both sensors are wrapped in a single item in pisense which can be queried for pressure, humidity, or temperature:
>>> hat.environ.pressure
1025.3486328125
>>> hat.environ.humidity
51.75486755371094
>>> hat.environ.temperature
29.045833587646484
The pressure is returned in millibars (which are equivalent to hectopascals). The humidity is given as a relative humidity percentage. Finally, the temperature is returned in celsius.
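Since the units are fixed, converting to other common units is simple arithmetic. The conversion factors below are standard physics, not part of pisense:

```python
def mbar_to_inhg(p):
    """Millibars (hPa) to inches of mercury."""
    return p * 0.02953

def celsius_to_fahrenheit(t):
    """Celsius to Fahrenheit."""
    return t * 9 / 5 + 32

print(round(mbar_to_inhg(1013.25), 2))   # 29.92 (one standard atmosphere)
print(celsius_to_fahrenheit(25.0))       # 77.0
```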
Despite there being effectively two temperature sensors there’s only a single temperature property. By default it returns the reading from the humidity sensor, but you can change this with the temp_source attribute:
>>> hat.environ.temp_source
<function temp_humidity at 0x7515b588>
>>> hat.environ.temp_source = pisense.temp_pressure
>>> hat.environ.temperature
29.149999618530273
>>> hat.environ.temp_source = pisense.temp_humidity
>>> hat.environ.temperature
25.24289321899414
Note that both temperature readings can be quite different! You can also configure it to take the average of the two readings:
>>> hat.environ.temp_source = pisense.temp_average
>>> hat.environ.temperature
27.206080436706543
However, if you think this will give you more accuracy, I’d recommend referring to Dilbert first!
Like the joystick, the environment sensor(s) can also be treated as an iterator:
>>> for reading in hat.environ:
...     print(repr(reading))
...
EnvironReadings(pressure=1025.41, humidity=51.1534, temperature=27.1774)
EnvironReadings(pressure=1025.41, humidity=50.9851, temperature=27.2261)
EnvironReadings(pressure=1025.41, humidity=50.9851, temperature=27.2271)
EnvironReadings(pressure=1025.42, humidity=50.9851, temperature=27.2240)
EnvironReadings(pressure=1025.42, humidity=50.9209, temperature=27.2240)
EnvironReadings(pressure=1025.42, humidity=50.9209, temperature=27.2230)
EnvironReadings(pressure=1025.42, humidity=50.9209, temperature=27.2261)
EnvironReadings(pressure=1025.42, humidity=50.9209, temperature=27.2271)
EnvironReadings(pressure=1025.42, humidity=51.0693, temperature=27.2331)
^C
Note
As above, press Ctrl-c when you want to terminate the loop.
A simple experiment you can run is to breathe near the humidity sensor and then query its value. You should see the value rise quite rapidly before it slowly falls back down as the vapour you exhaled evaporates from the surface of the sensor.
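The iterable interface also makes logging readings trivial. This sketch writes readings to CSV; a canned list of (pressure, humidity, temperature) tuples stands in for hat.environ, whose EnvironReadings unpack the same way, being namedtuples:

```python
import csv
import io

def log_readings(readings, out):
    """Write an iterable of environ readings to a CSV file-like object."""
    writer = csv.writer(out)
    writer.writerow(('pressure', 'humidity', 'temperature'))
    for pressure, humidity, temperature in readings:
        writer.writerow((pressure, humidity, temperature))

# With the HAT you would pass hat.environ (and an open file) instead
fake = [(1025.41, 51.15, 27.18), (1025.42, 50.92, 27.22)]
buf = io.StringIO()
log_readings(fake, buf)
print(buf.getvalue())
```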
Inertial Measurement Unit (IMU)¶
The Inertial Measurement Unit (IMU) on the Sense HAT actually consists of three different sensors (an accelerometer, a gyroscope, and a magnetometer) each of which provide three readings (X, Y, and Z). This is why you may also hear the sensor referred to as a 9-DoF (9 Degrees of Freedom) sensor; it returns 9 independent values.
You can read values from the sensors independently:
>>> hat.imu.accel
IMUVector(x=0.0404885, y=0.0551139, z=1.01719)
>>> hat.imu.gyro
IMUVector(x=0.044841, y=0.00200727, z=-0.0528594)
>>> hat.imu.compass
IMUVector(x=-21.1644, y=-12.2358, z=18.4494)
The accelerometer returns values in g (standard gravities, equivalent to 9.80665m/s²). Hence, with the Sense HAT lying flat on a table, the X and Y values of the accelerometer should be close to zero, while the Z value should be close to 1 (because gravity is a constant acceleration force toward the center of the Earth … assuming that you’re on Earth, that is).
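To convert a reading in g to SI units, multiply by standard gravity; a tiny sketch (the constant is standard physics, not part of the library):

```python
G = 9.80665  # m/s² per standard gravity

def g_to_ms2(value):
    """Convert an accelerometer reading in g to m/s²."""
    return value * G

# With the HAT flat on a table, accel.z reads close to 1 g:
print(g_to_ms2(1.0))   # 9.80665
```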
The gyroscope returns values in radians per second. With the Sense HAT lying stationary all values should be close to zero. If you wish to test the gyroscope, set the console to continually print values and slowly rotate the HAT:
>>> while True:
...     print(hat.imu.gyro)
...     sleep(0.1)
...
IMUVector(x=0.0437177, y=0.00241541, z=-0.0463548)
IMUVector(x=0.0408809, y=0.00207451, z=-0.0443745)
IMUVector(x=0.0428965, y=0.00294054, z=-0.0448299)
IMUVector(x=0.0376711, y=0.00259082, z=-0.0440765)
IMUVector(x=0.0376385, y=0.00705177, z=-0.0457381)
IMUVector(x=0.0276967, y=-0.00117483, z=-0.0446691)
IMUVector(x=-0.206876, y=-0.0201117, z=-0.128358)
IMUVector(x=-0.0773721, y=-0.523465, z=-0.318948)
IMUVector(x=-0.429841, y=-0.663047, z=0.0814746)
IMUVector(x=0.288231, y=-1.13005, z=-0.0245105)
IMUVector(x=-0.450611, y=-1.86431, z=-0.382783)
IMUVector(x=-0.173889, y=-1.05461, z=-0.238619)
IMUVector(x=-0.225202, y=-2.61934, z=-0.0840699)
IMUVector(x=-0.00529005, y=-1.86309, z=-0.000686785)
IMUVector(x=-0.00254116, y=-1.85271, z=0.115072)
IMUVector(x=-0.0382768, y=-0.26965, z=-0.374536)
Note
As above, press Ctrl-c when you want to terminate the loop.
Finally, the magnetometer returns values in µT (micro-Teslas, where 1µT is equal to 10mG or milli-Gauss). The Earth’s magnetic field is incredibly weak, so if you wish to test the magnetometer it is easier to do so with a permanent magnet, especially something strong like a small neodymium magnet. Bringing such a magnet within 10cm of the HAT should provoke an obvious reaction in the readings.
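For a rough compass heading you can take the arctangent of the magnetometer’s X and Y components. This is a naive sketch that assumes the HAT is level; a proper heading needs tilt compensation, which the composite orientation reading already performs for you:

```python
import math

def heading(x, y):
    """Naive compass heading in degrees from magnetometer X/Y (HAT level)."""
    return math.degrees(math.atan2(y, x)) % 360

# With the HAT: heading(hat.imu.compass.x, hat.imu.compass.y)
print(heading(1, 0))    # 0.0
print(heading(0, 1))    # 90.0
```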
The readings from these three components are combined by the underlying library to form a composite “orientation” reading which provides the roll, pitch, and yaw of the HAT in radians:
>>> hat.imu.orient
IMUOrient(roll=0.868906 (49.8°), pitch=1.2295 (70.4°), yaw=0.818843 (46.9°))
Note that while the representation of the reading includes degree conversions for the sake of convenience, the reading returned by querying the properties is always in radians (you can convert to degrees with the built-in math.degrees() function). Finally, like the joystick and the environment sensors, the IMU itself can be treated as an iterator:
>>> for state in hat.imu:
...     print(repr(state))
...
IMUState(compass=IMUVector(x=-13.9255, y=-30.4649, z=-18.815), gyro=IMUVector(x=0.0393031, y=0.00371209, z=-0.0437528), accel=IMUVector(x=0.0409734, y=0.0517148, z=1.00427), orient=IMUOrient(roll=2.17333 (124.5°), pitch=-1.18527 (-67.9°), yaw=2.81119 (161.1°)))
IMUState(compass=IMUVector(x=-19.879, y=-29.4562, z=-7.37771), gyro=IMUVector(x=0.040144, y=-0.00145538, z=-0.0430174), accel=IMUVector(x=0.0431554, y=0.0495297, z=1.00939), orient=IMUOrient(roll=2.09063 (119.8°), pitch=-1.15771 (-66.3°), yaw=2.85458 (163.6°)))
IMUState(compass=IMUVector(x=-19.879, y=-29.4562, z=-7.37771), gyro=IMUVector(x=0.040144, y=-0.00145538, z=-0.0430174), accel=IMUVector(x=0.0431554, y=0.0495297, z=1.00939), orient=IMUOrient(roll=2.09063 (119.8°), pitch=-1.15771 (-66.3°), yaw=2.85458 (163.6°)))
IMUState(compass=IMUVector(x=-19.879, y=-29.4562, z=-7.37771), gyro=IMUVector(x=0.040144, y=-0.00145538, z=-0.0430174), accel=IMUVector(x=0.0431554, y=0.0495297, z=1.00939), orient=IMUOrient(roll=2.09063 (119.8°), pitch=-1.15771 (-66.3°), yaw=2.85458 (163.6°)))
IMUState(compass=IMUVector(x=-19.879, y=-29.4562, z=-7.37771), gyro=IMUVector(x=0.040144, y=-0.00145538, z=-0.0430174), accel=IMUVector(x=0.0431554, y=0.0495297, z=1.00939), orient=IMUOrient(roll=2.09063 (119.8°), pitch=-1.15771 (-66.3°), yaw=2.85458 (163.6°)))
IMUState(compass=IMUVector(x=-24.5605, y=-28.5779, z=1.99134), gyro=IMUVector(x=0.0379679, y=0.00247297, z=-0.0392915), accel=IMUVector(x=0.0421856, y=0.0500153, z=1.01597), orient=IMUOrient(roll=2.01459 (115.4°), pitch=-1.13169 (-64.8°), yaw=2.89324 (165.8°)))
IMUState(compass=IMUVector(x=-24.5605, y=-28.5779, z=1.99134), gyro=IMUVector(x=0.0379679, y=0.00247297, z=-0.0392915), accel=IMUVector(x=0.0421856, y=0.0500153, z=1.01597), orient=IMUOrient(roll=2.01459 (115.4°), pitch=-1.13169 (-64.8°), yaw=2.89324 (165.8°)))
IMUState(compass=IMUVector(x=-24.5605, y=-28.5779, z=1.99134), gyro=IMUVector(x=0.0379679, y=0.00247297, z=-0.0392915), accel=IMUVector(x=0.0421856, y=0.0500153, z=1.01597), orient=IMUOrient(roll=2.01459 (115.4°), pitch=-1.13169 (-64.8°), yaw=2.89324 (165.8°)))
IMUState(compass=IMUVector(x=-24.5605, y=-28.5779, z=1.99134), gyro=IMUVector(x=0.0379679, y=0.00247297, z=-0.0392915), accel=IMUVector(x=0.0421856, y=0.0500153, z=1.01597), orient=IMUOrient(roll=2.01459 (115.4°), pitch=-1.13169 (-64.8°), yaw=2.89324 (165.8°)))
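The orientation values are all in radians; converting a whole reading to degrees is a one-liner with math.degrees(). Here’s a sketch with a namedtuple standing in for IMUOrient so it runs without the HAT:

```python
import math
from collections import namedtuple

# A stand-in for pisense's IMUOrient
Orient = namedtuple('Orient', ('roll', 'pitch', 'yaw'))

def to_degrees(o):
    """Convert a roll/pitch/yaw reading from radians to degrees."""
    return Orient(*(math.degrees(v) for v in o))

o = Orient(roll=math.pi / 2, pitch=0.0, yaw=math.pi)
print(to_degrees(o))   # Orient(roll=90.0, pitch=0.0, yaw=180.0)
```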
Further Reading¶
This concludes the tour of the Raspberry Pi Sense HAT, and of the bare functionality of the pisense library. The next sections will introduce some simple projects to give you an idea of how the library can be used to combine these facilities to useful or fun effect!
Simple Demos¶
To get us warmed up before we attempt some complete applications, here are some simple demos that use the functionality of the Sense HAT. Along with some demos there’s a small exercise, which you might like to try if you want to hone your skills with the library.
Rainbow Scroller¶
There are many different color systems, and the colorzero library that pisense relies upon implements several, including HSV (Hue, Saturation, Value). In this scheme, hue is essentially cyclic. This makes it quite easy to produce a scrolling rainbow display. We’ll construct an 8x8 array in which the hue of a color depends on the sum of its X and Y coordinates divided by 14 (as the maximum sum is 7 + 7), which will give us a nice range of hues. You can try this easily from the command line:
>>> from pisense import SenseHAT, array
>>> from colorzero import Color
>>> hat = SenseHAT()
>>> rainbow = array([
... Color(h=(x + y) / 14, s=1, v=1)
... for x in range(8)
... for y in range(8)
... ])
>>> hat.screen.array = rainbow
At this point you should have a nice rainbow on your display. How do we make this scroll? We simply construct a loop that increments the hue a tiny amount each time round. For example:
from __future__ import division  # for py2.x compatibility
from pisense import SenseHAT, array
from colorzero import Color
from time import sleep

hat = SenseHAT()
offset = 0.0
while True:
    rainbow = array([
        Color(h=(x + y) / 14 + offset, s=1, v=1)
        for x in range(8)
        for y in range(8)
    ])
    hat.screen.array = rainbow
    offset += 0.05
    sleep(0.05)
Joystick Movement¶
In this demo we’ll move a dot around the screen in response to joystick moves. The easiest way to interact with the joystick is to treat it as an iterator (treating it as if it’s a rather slow list that only provides another value when something happens to the joystick). Most of the time you’re not that interested in the joystick events themselves, but rather on what they mean to your application.
Hence our first step is to define a generator function that transforms joystick events into relative X, Y movements:
def movements(events):
    for event in events:
        if event.pressed:
            try:
                yield {
                    'left': (-1, 0),
                    'right': (1, 0),
                    'up': (0, -1),
                    'down': (0, 1),
                }[event.direction]
            except KeyError:
                break  # enter exits
You can try this out from the command line like so:
>>> hat = SenseHAT()
>>> for x, y in movements(hat.stick):
...     print('x:', x, 'y:', y)
...
x: 1 y: 0
x: 1 y: 0
x: 0 y: 1
x: 0 y: 1
x: -1 y: 0
Note
You may see several control characters like ^[[C and ^[[D appearing as you play with this. These are the raw characters that represent the cursor keys; this output can be ignored. Press the joystick in (generating an “enter” event) when you want to terminate the loop.
Now, we’ll define another simple generator that transforms these into arrays for the display. Finally, we’ll use that output to drive the display:
from pisense import SenseHAT, array
from colorzero import Color

def movements(events):
    for event in events:
        if event.pressed:
            try:
                yield {
                    'left': (-1, 0),
                    'right': (1, 0),
                    'up': (0, -1),
                    'down': (0, 1),
                }[event.direction]
            except KeyError:
                break  # enter exits

def arrays(moves):
    a = array(Color('black'))  # blank screen
    x = y = 3
    a[y, x] = Color('white')
    yield a  # initial position
    for dx, dy in moves:
        a[y, x] = Color('black')
        x = max(0, min(7, x + dx))
        y = max(0, min(7, y + dy))
        a[y, x] = Color('white')
        yield a
    a[y, x] = Color('black')
    yield a  # end with a blank display

with SenseHAT() as hat:
    for a in arrays(movements(hat.stick)):
        hat.screen.array = a
This pattern of programming, treating inputs as iterators and writing a series of transforms to produce screen arrays, will become a common theme in much of the rest of this manual.
Exercise
Can you convert the rainbow demo above to use an iterable for its display? Hint: the iterable doesn’t need to take any input because it’s not really transforming anything, just yielding outputs.
Orientation Sensing¶
Could we adapt the joystick example to “roll” the dot around the screen using the Inertial Measurement Unit (IMU)? Quite easily as it happens. The only thing that needs to change is the transformation that yields the changes in the X and Y positions. Instead of transforming joystick events, it needs to transform IMU readings.
As it happens, the IMU’s accelerometer is perfect for this task. When the HAT is tilted to the right, the X-axis of the accelerometer winds up pointing downward, which means it starts reading close to 1 (due to gravity). The same happens for the Y-axis when the HAT is tilted toward you. So, the transformation is quite trivial:
- Grab the accelerometer’s X and Y axes
- Clamp the values to the range -1 to 1 (we don’t want things moving too fast!)
- Round the values to the nearest integer (so we stay still until the HAT is tilted quite a lot)
- Don’t bother yielding a movement unless one value is non-zero
- Introduce a short delay (with sleep()) because the IMU is capable of spitting out readings hundreds of times a second, and we don’t want the dot shooting around that fast!
Here’s the modified movements function:
def movements(imu):
    for reading in imu:
        delta_x = int(round(max(-1, min(1, reading.accel.x))))
        delta_y = int(round(max(-1, min(1, reading.accel.y))))
        if delta_x != 0 or delta_y != 0:
            yield delta_x, delta_y
        sleep(1/10)
Again, you can try this function out from the command line in the same manner as the joystick; just pass the IMU component to it instead:
>>> from pisense import SenseHAT
>>> hat = SenseHAT()
>>> for x, y in movements(hat.imu):
...     print('x:', x, 'y:', y)
...
x: 1 y: 0
x: 1 y: 0
x: 0 y: 1
x: 0 y: 1
x: -1 y: 0
Here’s the whole thing put together. Note that the only substantial change from the joystick demo above is the movements function:
from __future__ import division  # for py2.x compatibility
from pisense import SenseHAT, array
from colorzero import Color
from time import sleep

def movements(imu):
    for reading in imu:
        delta_x = int(round(max(-1, min(1, reading.accel.x))))
        delta_y = int(round(max(-1, min(1, reading.accel.y))))
        if delta_x != 0 or delta_y != 0:
            yield delta_x, delta_y
        sleep(1/10)

def arrays(moves):
    a = array(Color('black'))  # blank screen
    x = y = 3
    a[y, x] = Color('white')
    yield a  # initial position
    for dx, dy in moves:
        a[y, x] = Color('black')
        x = max(0, min(7, x + dx))
        y = max(0, min(7, y + dy))
        a[y, x] = Color('white')
        yield a
    a[y, x] = Color('black')
    yield a  # end with a blank display

with SenseHAT() as hat:
    for a in arrays(movements(hat.imu)):
        hat.screen.array = a
Exercise
Can you combine the orientation demo with the rainbow scroller and make the rainbow scroll in different directions based on the orientation of the board?
Environment Sensing¶
How about a simple thermometer? We’ll treat the thermometer as an iterator, and write a transform that produces a screen containing the temperature as both a number (in a small font), and a very basic chart which lights more elements as the temperature increases.
We’ll start with a function that takes a reading, limits it to the range of temperatures we’re interested in (0°C to 50°C), and distributes that evenly over the range 0 <= n < 64 (representing all 64 elements of the HAT’s display):
from __future__ import division  # for py2.x compatibility
from pisense import SenseHAT, array, draw_text, image_to_rgb
from colorzero import Color, Red
from time import sleep
import numpy as np

def thermometer(reading):
    t = max(0, min(50, reading.temperature)) / 50 * 64
Next, we need to construct the crude chart representing the temperature. For this we call array() and pass it a list of 64 Color objects which will be solid red if the element is definitely below the current temperature, a scaled red for the element at the current temperature, and black (off) if the element is above the current temperature. We also flip the result as we want the chart to start at the bottom and work its way up:
    screen = array([
        Color('red') if i < int(t) else
        Color('red') * Red(t - int(t)) if i < t else
        Color('black')
        for i in range(64)
    ])
    screen = np.flipud(screen)
Next, we call draw_text() which will return us a small Image object containing the rendered text (we’ve added some padding at the bottom so the text is “top aligned”). We’ll convert that to an array, and “add” that to the chart we’ve drawn (a simple method of overlaying), then clip the result to the range 0 to 1 (because where the text overlays the chart we’ll probably exceed the bounds of the red channel):
    text = image_to_rgb(draw_text(str(int(round(reading.temperature))),
                                  'small.pil', foreground=Color('gray'),
                                  padding=(0, 0, 0, 3)))
    screen[:text.shape[0], :text.shape[1]] += text
    return screen.clip(0, 1)
Finally, here’s the whole thing put together:
from __future__ import division  # for py2.x compatibility
from pisense import SenseHAT, array, draw_text, image_to_rgb
from colorzero import Color, Red
from time import sleep
import numpy as np

def thermometer(reading):
    t = max(0, min(50, reading.temperature)) / 50 * 64
    screen = array([
        Color('red') if i < int(t) else
        Color('red') * Red(t - int(t)) if i < t else
        Color('black')
        for i in range(64)
    ])
    screen = np.flipud(screen)
    text = image_to_rgb(draw_text(str(int(round(reading.temperature))),
                                  'small.pil', foreground=Color('gray'),
                                  padding=(0, 0, 0, 3)))
    screen[:text.shape[0], :text.shape[1]] += text
    return screen.clip(0, 1)

with SenseHAT() as hat:
    for reading in hat.environ:
        hat.screen.array = thermometer(reading)
        sleep(0.5)
You can test this script by running it, then placing your finger on the humidity sensor (which is the sensor we’re using to read temperature). If the ambient temperature is below about 24°C you should see the reading rise quite quickly. Take your finger off the sensor and it should fall back down again.
Why, in this example, did we construct a function that took a single reading? Why did we not pass the environ iterator to the thermometer function? Quite simply because we didn’t have to: making an array for the screen works from a single reading. It doesn’t have any need to know prior readings, or to keep any state between frames, so it’s simplest to make it a straight-forward function. That said…
Exercise
Can you change the script to show whether the temperature is rising or falling? Hint: passing the iterator to the transform is one way to do this, but for a neater way (without passing the iterator), look up pairwise in itertools.
Project: Environment Monitor¶
Here’s our first “full” project for the Sense HAT: make an environmental monitor that can display the temperature, humidity, and pressure in a variety of forms. We’ve already seen a demo thermometer in Environment Sensing. First we’ll construct variants of this for the humidity and pressure sensors. Then we’ll combine all three into an application. Finally, we’ll add interactivity using the joystick to select the required functionality, recording the data to a database, and a trivial web interface.
Hygrometer¶
Firstly, let’s adapt our thermometer script for sensing humidity. Here’s the thermometer script again:
from __future__ import division # for py2.x compatibility
from pisense import SenseHAT, array, draw_text, image_to_rgb
from colorzero import Color, Red
from time import sleep
import numpy as np
def thermometer(reading):
t = max(0, min(50, reading.temperature)) / 50 * 64
screen = array([
Color('red') if i < int(t) else
Color('red') * Red(t - int(t)) if i < t else
Color('black')
for i in range(64)
])
screen = np.flipud(screen)
text = image_to_rgb(draw_text(str(int(round(reading.temperature))),
'small.pil', foreground=Color('gray'),
padding=(0, 0, 0, 3)))
screen[:text.shape[0], :text.shape[1]] += text
return screen.clip(0, 1)
with SenseHAT() as hat:
for reading in hat.environ:
hat.screen.array = thermometer(reading)
sleep(0.5)
We’ll use a very similar structure for our hygrometer. This time we don’t need to clamp the range (we’ll use the full 0% to 100%, but we’ll scale it to 0 <= n < 64 again). We’ll use a reasonably dark blue (“#000088” in HTML terms) for the chart, but everything else should look fairly familiar:
from __future__ import division # for py2.x compatibility
from pisense import SenseHAT, array, draw_text, image_to_rgb
from colorzero import Color, Blue
from time import sleep
import numpy as np
def hygrometer(reading):
h = reading.humidity / 100 * 64
screen = array([
Color('#008') if i < int(h) else
Color('#008') * Blue(h - int(h)) if i < h else
Color('black')
for i in range(64)
])
screen = np.flipud(screen)
text = image_to_rgb(draw_text('^^' if reading.humidity > 99 else
str(int(round(reading.humidity))),
'small.pil', foreground=Color('gray'),
padding=(0, 0, 0, 3)))
screen[:text.shape[0], :text.shape[1]] += text
return screen.clip(0, 1)
with SenseHAT() as hat:
for reading in hat.environ:
hat.screen.array = hygrometer(reading)
sleep(0.5)
The one other subtle change is in the caption. We can’t fit “100” on our display; it’s too wide (this wasn’t a problem for the thermometer where we clamped the temperature range from 0°C to 50°C; if you guessed this was for simplicity, you were right!). Instead, whenever the humidity is >99% we display “^^” to indicate the maximum value.
Test this script out by running it and then breathing gently on the humidity sensor. You should see the humidity reading rise rapidly (possibly to “^^”) then slowly fall back down.
Barometer¶
Next we’ll tackle the pressure sensor. This will have a very familiar structure by now:
- Clamp the pressure readings to a sensible range (in this case we’ll use 950mbar to 1050mbar).
- Scale this to the range 0 <= n < 64.
- Draw a rudimentary chart (we’ll use green to distinguish it from our thermometer and hygrometer scripts).
- Draw the pressure as a number superimposed on the chart.
Oh dear, there’s a problem! All the valid pressure values are too large to fit on the display, so we can’t use an easy hack like displaying “^^” as we did in the hygrometer above.
It’d be nice if the pressure reading could scroll back and forth on the display, still superimposed on the chart. It turns out, using iterators again, this is actually quite easy to achieve. What we want is a sliding window over our rendered text, like so:
Hence our first requirement is an infinite iterator which produces the “bouncing” X offset for the sliding window:
def bounce(it):
# bounce('ABC') --> A B C C B A A B C ...
return cycle(chain(it, reversed(it)))
Well, that was simple!
The cycle() and chain() functions come from the standard library’s fantastic itertools module, which I urge anyone using iterators to check out. The reversed() function is a standard built-in function in Python.
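Since bounce() returns an infinite iterator we can’t just call list() on it, but we can peek at its first few values with itertools.islice() to confirm it matches the comment:

```python
from itertools import cycle, chain, islice

def bounce(it):
    # bounce('ABC') --> A B C C B A A B C ...
    return cycle(chain(it, reversed(it)))

# islice() takes only the first 9 values from the infinite sequence
print(list(islice(bounce('ABC'), 9)))
# ['A', 'B', 'C', 'C', 'B', 'A', 'A', 'B', 'C']
```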
How do we combine the offsets produced by bounce with the readings from the sensor? We simply use the built-in zip() function:
# NB: this script is not compatible with py2.x
from pisense import SenseHAT, array, draw_text, image_to_rgb
from colorzero import Color, Green
from time import sleep
from itertools import cycle, chain
import numpy as np
def bounce(it):
# bounce('ABC') --> A B C C B A A B C ...
return cycle(chain(it, reversed(it)))
def barometer(offset, reading):
p = (max(950, min(1050, reading.pressure)) - 950) / 100 * 64
screen = array([
Color('green') if i < int(p) else
Color('green') * Green(p - int(p)) if i < p else
Color('black')
for i in range(64)
])
screen = np.flipud(screen)
text = image_to_rgb(draw_text(str(int(round(reading.pressure))),
'small.pil', foreground=Color('gray'),
padding=(0, 0, 8, 3)))
screen[:text.shape[0], :] += text[:, offset:offset + 8]
return screen.clip(0, 1)
with SenseHAT() as hat:
for offset, reading in zip(bounce(range(8)), hat.environ):
hat.screen.array = barometer(offset, reading)
sleep(0.2)
Note
This example will only work in Python 3, where zip() is evaluated lazily. In Python 2, the script will hang (and eventually run out of memory) as zip() attempts to construct a complete list from the infinite iterator (use izip from itertools in Python 2 instead).
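If you want to convince yourself that Python 3’s zip() really is lazy, pair an infinite iterator with a short one; only as many values as the shorter iterable provides are ever drawn:

```python
from itertools import count

# count() is infinite, but zip() never tries to exhaust it: it stops
# as soon as the shorter iterable does
pairs = list(zip(count(), 'abc'))
print(pairs)  # [(0, 'a'), (1, 'b'), (2, 'c')]
```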
Exercise
Can you adjust the hygrometer script so that it scrolls “100” when that is the reading, but smaller values stay static on the display?
Combining Screens¶
We now have the three scripts that we want for our environmental monitor, but how do we combine them into a single application? Our first step will be a simple one: to make a function that will rotate between each of our transformations periodically, first showing the thermometer for a few seconds, then the hygrometer, then the barometer.
The easiest way to do this is to modify our thermometer and hygrometer transforms to take a (useless) offset parameter just like the barometer transform. Then (because our functions all now have a common prototype, and functions are first class objects in Python) we can construct a cycle() of transforms and just loop around them. The result looks like this:
# NB: this script is not compatible with py2.x
from pisense import SenseHAT, array, draw_text, image_to_rgb
from colorzero import Color, Red, Green, Blue
from time import sleep
from itertools import cycle, chain
import numpy as np
def thermometer(offset, reading):
t = max(0, min(50, reading.temperature)) / 50 * 64
screen = array([
Color('red') if i < int(t) else
Color('red') * Red(t - int(t)) if i < t else
Color('black')
for i in range(64)
])
screen = np.flipud(screen)
text = image_to_rgb(draw_text(str(int(round(reading.temperature))),
'small.pil', foreground=Color('gray'),
padding=(0, 0, 0, 3)))
screen[:text.shape[0], :text.shape[1]] += text
return screen.clip(0, 1)
def hygrometer(offset, reading):
h = reading.humidity / 100 * 64
screen = array([
Color('#008') if i < int(h) else
Color('#008') * Blue(h - int(h)) if i < h else
Color('black')
for i in range(64)
])
screen = np.flipud(screen)
text = image_to_rgb(draw_text('^^' if reading.humidity > 99 else
str(int(round(reading.humidity))),
'small.pil', foreground=Color('gray'),
padding=(0, 0, 0, 3)))
screen[:text.shape[0], :text.shape[1]] += text
return screen.clip(0, 1)
def barometer(offset, reading):
p = (max(950, min(1050, reading.pressure)) - 950) / 100 * 64
screen = array([
Color('green') if i < int(p) else
Color('green') * Green(p - int(p)) if i < p else
Color('black')
for i in range(64)
])
screen = np.flipud(screen)
text = image_to_rgb(draw_text(str(int(round(reading.pressure))),
'small.pil', foreground=Color('gray'),
padding=(0, 0, 8, 3)))
screen[:text.shape[0], :] += text[:, offset:offset + 8]
return screen.clip(0, 1)
def bounce(it):
# bounce('ABC') --> A B C C B A A B C ...
return cycle(chain(it, reversed(it)))
def switcher(readings):
for transform in cycle((thermometer, hygrometer, barometer)):
for offset, reading in zip(bounce(range(8)), readings):
yield transform(offset, reading)
sleep(0.2)
def main():
with SenseHAT() as hat:
for a in switcher(hat.environ):
hat.screen.array = a
if __name__ == '__main__':
main()
Interactivity!¶
Switching automatically between things is okay, but it would be nicer if we could control the switching with the joystick. For example, we could lay out our screens side-by-side with the thermometer at the far left, then the hygrometer, then the barometer at the far right, and when the user presses left or right we could scroll between the displays.
To do this we just need to refine our switcher function so that it depends on both the readings (which it will pass to whatever the current transformation is) and events from the joystick.
def switcher(events, readings):
screens = {
(thermometer, 'right'): hygrometer,
(hygrometer, 'left'): thermometer,
(hygrometer, 'right'): barometer,
(barometer, 'left'): hygrometer,
}
screen = thermometer
for event, offset, reading in zip(events, bounce(range(8)), readings):
yield screen(offset, reading)
if event is not None and event.pressed:
try:
screen = screens[screen, event.direction]
except KeyError:
break
sleep(0.2)
However, we have a problem: the joystick only yields events when something happens, so if we use this, our display will only update when the joystick emits an event (because zip() will only yield a tuple of values when all the iterators it covers have each yielded a value).
Thankfully, there’s a simple solution: the SenseStick.stream attribute. When this is set to True, the joystick will immediately yield a value whenever one is requested. If no event has occurred, it will simply yield None. So all our script needs to do is remember to set SenseStick.stream to True at the start and everything will work happily. Just to make the exit a bit prettier we’ll fade the screen to black too:
def main():
with SenseHAT() as hat:
hat.stick.stream = True
for a in switcher(hat.stick, hat.environ):
hat.screen.array = a
hat.screen.fade_to(array(Color('black')))
Finishing Touches¶
The fade is a nice touch, but it would be nicer if the screens would “slide” between each other. And we’ve still got to add the database output too!
Thankfully this is all pretty easy to arrange. The main procedure is the ideal place to handle transitions like fading and sliding; it just needs to be told when to perform them. The switcher function can tell it when to do this by yielding two values: the array to copy to the display, and the transition animation to perform (if any). While we’re at it, we may as well move the fade to black to the end of the loop in switcher.
def switcher(events, readings):
screens = {
(thermometer, 'right'): hygrometer,
(hygrometer, 'left'): thermometer,
(hygrometer, 'right'): barometer,
(barometer, 'left'): hygrometer,
}
screen = thermometer
for event, offset, reading in zip(events, bounce(range(8)), readings):
anim = 'draw'
if event is not None and event.pressed:
try:
screen = screens[screen, event.direction]
anim = event.direction
except KeyError:
yield array(Color('black')), 'fade'
break
yield screen(offset, reading), anim
sleep(0.2)
Now we enhance the main function to perform the various transitions:
def main():
with SenseHAT() as hat:
hat.stick.stream = True
for a, anim in switcher(hat.stick, hat.environ):
if anim == 'fade':
hat.screen.fade_to(a, duration=0.5)
elif anim == 'right':
hat.screen.slide_to(a, direction='left', duration=0.5)
elif anim == 'left':
hat.screen.slide_to(a, direction='right', duration=0.5)
else:
hat.screen.array = a
Finally, we did promise that we’re going to store the data in a database. Ideally, we want a round-robin database for which we can use the excellent rrdtool project (if you wish to understand the rrdtool calls below, I’d strongly recommend reading its documentation). This provides all sorts of facilities beyond just recording the data, including averaging it over convenient time periods and producing good-looking charts of the data.
Note
Unfortunately, the Python 3 bindings for rrdtool don’t appear to be packaged at the moment so we’ll need to install them manually. On Raspbian you can do this like so:
$ sudo apt install rrdtool librrd-dev python3-pip
$ sudo pip3 install rrdtool
On other platforms the pip command will likely be similar, but the pre-requisites installed with apt may well differ.
We’ll add a little code to construct the round-robin database if it doesn’t already exist, then add a tiny amount of code to record readings into the database. The final result (with the lines we’ve added highlighted) is as follows:
# NB: this script is not compatible with py2.x
from pisense import SenseHAT, array, draw_text, image_to_rgb
from colorzero import Color, Red, Green, Blue
from time import time, sleep
from itertools import cycle, chain
import numpy as np
import io
import rrdtool
def thermometer(offset, reading):
t = max(0, min(50, reading.temperature)) / 50 * 64
screen = array([
Color('red') if i < int(t) else
Color('red') * Red(t - int(t)) if i < t else
Color('black')
for i in range(64)
])
screen = np.flipud(screen)
text = image_to_rgb(draw_text(str(int(round(reading.temperature))),
'small.pil', foreground=Color('gray'),
padding=(0, 0, 0, 3)))
screen[:text.shape[0], :text.shape[1]] += text
return screen.clip(0, 1)
def hygrometer(offset, reading):
h = reading.humidity / 100 * 64
screen = array([
Color('#008') if i < int(h) else
Color('#008') * Blue(h - int(h)) if i < h else
Color('black')
for i in range(64)
])
screen = np.flipud(screen)
text = image_to_rgb(draw_text('^^' if reading.humidity > 99 else
str(int(round(reading.humidity))),
'small.pil', foreground=Color('gray'),
padding=(0, 0, 0, 3)))
screen[:text.shape[0], :text.shape[1]] += text
return screen.clip(0, 1)
def barometer(offset, reading):
p = (max(950, min(1050, reading.pressure)) - 950) / 100 * 64
screen = array([
Color('green') if i < int(p) else
Color('green') * Green(p - int(p)) if i < p else
Color('black')
for i in range(64)
])
screen = np.flipud(screen)
text = image_to_rgb(draw_text(str(int(round(reading.pressure))),
'small.pil', foreground=Color('gray'),
padding=(0, 0, 8, 3)))
screen[:text.shape[0], :] += text[:, offset:offset + 8]
return screen.clip(0, 1)
def bounce(it):
# bounce('ABC') --> A B C C B A A B C ...
return cycle(chain(it, reversed(it)))
def create_database(database):
try:
rrdtool.create(
database, # Filename of the database
'--no-overwrite', # Don't overwrite the file if it exists
'--step', '5s', # Data will be fed at least every 5 seconds
'DS:temperature:GAUGE:1m:-70:70', # Primary store for temperatures
'DS:humidity:GAUGE:1m:0:100', # Primary store for humidities
'DS:pressure:GAUGE:1m:900:1100', # Primary store for pressures
'RRA:AVERAGE:0.5:5s:1d', # Keep 1 day's worth of full-res data
'RRA:AVERAGE:0.5:5m:1M', # Keep 1 month of 5-minute-res data
'RRA:AVERAGE:0.5:1h:1y', # Keep 1 year of hourly data
'RRA:MIN:0.5:1h:1y', # ... including minimums
'RRA:MAX:0.5:1h:1y', # ... and maximums
'RRA:AVERAGE:0.5:1d:10y', # Keep 10 years of daily data
'RRA:MIN:0.5:1d:10y', # ... including minimums
'RRA:MAX:0.5:1d:10y', # ... and maximums
)
except rrdtool.OperationalError:
pass # file exists; ignore the error
def update_database(database, reading):
data = 'N:{r.temperature}:{r.humidity}:{r.pressure}'.format(r=reading)
rrdtool.update(database, data)
def switcher(events, readings, database='environ.rrd'):
create_database(database)
screens = {
(thermometer, 'right'): hygrometer,
(hygrometer, 'left'): thermometer,
(hygrometer, 'right'): barometer,
(barometer, 'left'): hygrometer,
}
screen = thermometer
last_update = None
for event, offset, reading in zip(events, bounce(range(8)), readings):
anim = 'draw'
if event is not None and event.pressed:
try:
screen = screens[screen, event.direction]
anim = event.direction
except KeyError:
yield array(Color('black')), 'fade'
break
now = time()
if last_update is None or now - last_update > 5:
# Only update the database every 5 seconds
last_update = now
update_database(database, reading)
yield screen(offset, reading), anim
sleep(0.2)
def main():
with SenseHAT() as hat:
hat.stick.stream = True
for a, anim in switcher(hat.stick, hat.environ):
if anim == 'fade':
hat.screen.fade_to(a, duration=0.5)
elif anim == 'right':
hat.screen.slide_to(a, direction='left', duration=0.5)
elif anim == 'left':
hat.screen.slide_to(a, direction='right', duration=0.5)
else:
hat.screen.array = a
if __name__ == '__main__':
main()
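As an aside, the “N” in the string built by update_database is rrdtool’s shorthand for “now”. The template itself can be checked without rrdtool or a Sense HAT by substituting a stand-in reading (the Reading namedtuple below is invented for the test; only the attribute names matter):

```python
from collections import namedtuple

# A stand-in for a pisense environment reading; only the attribute
# names used by the template matter here
Reading = namedtuple('Reading', ('temperature', 'humidity', 'pressure'))

r = Reading(temperature=21.5, humidity=48.2, pressure=1013.25)
data = 'N:{r.temperature}:{r.humidity}:{r.pressure}'.format(r=r)
print(data)  # N:21.5:48.2:1013.25
```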
Exercise
At the moment, it’s too easy to accidentally exit the script. Can you make the application rotate around the screens (i.e. moving right from the barometer screen takes the user back to the thermometer screen, and vice versa) so that pressing the joystick in is required to exit the application?
Finally, let’s whip up a little web-server that we can run alongside the Sense HAT script to allow remote clients to query our environmental data and see some pretty graphs of the history:
import rrdtool
from http.server import HTTPServer, BaseHTTPRequestHandler
from datetime import datetime
from threading import Lock
from pathlib import PurePosixPath
class SensorData():
def __init__(self, db):
self._db = db
self._data = rrdtool.lastupdate(db)
self._images = {}
@property
def date(self):
return self._data['date']
def __format__(self, format_spec):
element, units = format_spec.split(':')
template = """
<div class="sensor">
<h2>{title}</h2>
<span class="reading">{current:.1f}{units}</span>
<img class="recent" src="{element}_recent.svg" />
<img class="history" src="{element}_history.svg" />
</div>
"""
return template.format(
element=element,
title=element.title(),
units=units,
current=self._data['ds'][element])
def image(self, path):
try:
image = self._images[path]
except KeyError:
# generate it
p = PurePosixPath(path)
try:
element, duration = p.stem.split('_', 1)
except ValueError:
raise KeyError(path)
start = {
'recent': '1d',
'history': '1M',
}[duration]
color = {
'temperature': '#FF0000',
'humidity': '#0000FF',
'pressure': '#00FF00',
}[element]
self._images[path] = image = rrdtool.graphv(
'-',
'--imgformat', 'SVG',
'--border', '0',
'--color', 'BACK#00000000', # transparent
'--start', 'now-' + start,
'--end', 'now',
'DEF:v={db}:{element}:AVERAGE'.format(db=self._db, element=element),
'LINE2:v{color}'.format(color=color)
)['image']
return image
class RequestHandler(BaseHTTPRequestHandler):
database = 'environ.rrd'
data = None
index_template = """
<html>
<head>
<title>Sense HAT Environment Sensors</title>
<link href="https://fonts.googleapis.com/css?family=Raleway" rel="stylesheet">
<style>{style_sheet}</style>
</head>
<body>
<h1>Sense HAT Environment Sensors</h1>
<div id="timestamp">{data.date:%A, %d %b %Y %H:%M:%S}</div>
{data:temperature:°C}
{data:humidity:%RH}
{data:pressure:mbar}
<script>
setTimeout(() => location.reload(true), 10000);
</script>
</body>
</html>
"""
style_sheet = """
body {
font-family: "Raleway", sans-serif;
max-width: 700px;
margin: 1em auto;
}
h1 { text-align: center; }
div {
padding: 8px;
margin: 1em 0;
border-radius: 8px;
}
div#timestamp {
font-size: 16pt;
background-color: #bbf;
text-align: center;
}
div.sensor { background-color: #ddd; }
div.sensor h2 {
font-size: 20pt;
margin-top: 0;
padding-top: 0;
float: left;
}
span.reading {
font-size: 20pt;
float: right;
background-color: #ccc;
border-radius: 8px;
box-shadow: inset 0 0 4px black;
padding: 4px 8px;
}
"""
def get_sensor_data(self):
# Keep a copy of the latest SensorData around for efficiency
old_data = RequestHandler.data
new_data = SensorData(RequestHandler.database)
if old_data is None or new_data.date > old_data.date:
RequestHandler.data = new_data
return RequestHandler.data
def do_HEAD(self):
self.do_GET()
def do_GET(self):
if self.path == '/':
self.send_response(301)
self.send_header('Location', '/index.html')
self.end_headers()
elif self.path == '/index.html':
data = self.get_sensor_data()
content = RequestHandler.index_template.format(
style_sheet=RequestHandler.style_sheet,
data=data).encode('utf-8')
self.send_response(200)
self.send_header('Content-Type', 'text/html; charset=utf-8')
self.send_header('Content-Length', len(content))
self.send_header('Last-Modified', self.date_time_string(
data.date.timestamp()))
self.end_headers()
self.wfile.write(content)
elif self.path.endswith('.svg'):
data = self.get_sensor_data()
try:
content = data.image(self.path)
except KeyError:
self.send_error(404)
else:
self.send_response(200)
self.send_header('Content-Type', 'image/svg+xml')
self.send_header('Content-Length', len(content))
self.end_headers()
self.wfile.write(content)
else:
self.send_error(404)
def main():
httpd = HTTPServer(('', 8000), RequestHandler)
httpd.serve_forever()
if __name__ == '__main__':
main()
Run this alongside the monitor script, make sure your Pi is accessible on your local network and then visit http://your-pis-address-here:8000/ in a web-browser.
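One trick in the server worth highlighting: placeholders like {data:temperature:°C} in index_template work because SensorData implements __format__, which receives everything after the first colon as its format_spec. Here’s a minimal sketch of the same mechanism with a toy stand-in class (the Sensor class and its values are invented for illustration):

```python
class Sensor:
    # A toy stand-in for SensorData: __format__ receives everything
    # after the first colon of the placeholder as format_spec
    def __init__(self, values):
        self._values = values

    def __format__(self, format_spec):
        element, units = format_spec.split(':')
        return '{}: {}{}'.format(element.title(), self._values[element], units)

s = Sensor({'temperature': 21.5, 'humidity': 48.2})
print('{:temperature:°C}'.format(s))  # Temperature: 21.5°C
print('{:humidity:%RH}'.format(s))    # Humidity: 48.2%RH
```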
Note
We could have added this to the monitor script, but frankly there’s no point as rrdtool includes all the locking we need to have something reading the database while something else writes to it. This also ensures that a bug in one script doesn’t affect the operation of the other, and means web requests are far less likely to affect the operation of the Sense HAT interface.
Auto-start¶
This is the sort of application it would be nice to start automatically upon boot up. Thankfully, this is easy to arrange with a few systemd files. Create the following under /etc/systemd/system/monitor_app.service:
[Unit]
Description=An environment monitoring application
After=local-fs.target
[Service]
ExecStart=/usr/bin/python3 /home/pi/monitor_final.py
WorkingDirectory=/home/pi
User=pi
[Install]
WantedBy=multi-user.target
Note
You’ll need to modify the path for ExecStart to point to the location of your monitor_final.py script. You may want to modify WorkingDirectory too if you want the database to be stored in another location.
Then for the web service (if you want it), create the following under /etc/systemd/system/monitor_web.service:
[Unit]
Description=Web server for the environment monitoring application
After=local-fs.target network.target
[Service]
ExecStart=/usr/bin/python3 /home/pi/monitor_server.py
WorkingDirectory=/home/pi
User=pi
[Install]
WantedBy=multi-user.target
Note
Remember to modify ExecStart (and optionally WorkingDirectory) as above.
Finally, inform systemd of the changes and tell it we want to start these new services on boot-up. For example, the following commands could be used to achieve all of this:
$ cd /home/pi
$ nano monitor_app.service
$ nano monitor_web.service
$ sudo cp monitor_*.service /etc/systemd/system/
$ sudo systemctl daemon-reload
$ sudo systemctl enable monitor_app
$ sudo systemctl enable monitor_web
To start the services immediately:
$ sudo systemctl start monitor_app
$ sudo systemctl start monitor_web
To stop the services immediately:
$ sudo systemctl stop monitor_app
$ sudo systemctl stop monitor_web
If you want to disable these from starting at boot time you can simply run the following commands:
$ sudo systemctl disable monitor_app
$ sudo systemctl disable monitor_web
Naturally, you could disable the web service but leave the main application running too.
Project: Maze Game¶
Here’s another project for the Sense HAT that involves building a full maze solving game. Initially this will be controlled with the joystick (because it’s easier for debugging), but at the end we’ll switch to use the IMU to roll the ball through the maze.
Let’s start at a high level and work our way down. We’ll construct the application in the same manner as our earlier demos: a transformation of inputs (initially from the joystick, later from the IMU) into a series of screens to be shown on the display.
First some design points:
- The state of our maze can be represented as a large numpy array (larger than the screen anyway) which we’ll slice to show on the display.
- We’ll need:
- a color to represent walls (white)
- a color to represent unvisited spaces (black)
- a color to represent visited spaces (green)
- a color to represent the player’s position (red)
- a color to represent the goal (yellow)
- We’ll also need:
- a function to generate the maze
- (possibly) a function to draw the generated maze as a numpy array
- a transformation to convert joystick events / IMU readings into X+Y motion
- a transformation to convert motions into new display states (essentially this is the “game logic”)
- a function to render the display states including any requested animations (just like in the final monitor script previously)
Let’s start from the “top level” and work our way down. First, the imports:
import numpy as np
import pisense as ps
from random import sample
from colorzero import Color
from time import sleep
Our “main” function will define the colors we need, call a function to generate the maze, set up the motion transformation, the game transformation, and feed all this to the display renderer:
def main():
width = height = 8
colors = {
'unvisited': Color('black'),
'visited': Color('green'),
'wall': Color('white'),
'ball': Color('red'),
'goal': Color('yellow'),
}
with ps.SenseHAT() as hat:
maze = generate_maze(width, height, colors)
inputs = moves(hat.stick)
outputs = game(maze, colors, inputs)
display(hat.screen, outputs)
You may recall from our earlier demos (specifically Joystick Movement) that we had a neat little function that converted joystick events into X and Y delta values. Let’s copy that in next:
def moves(stick):
for event in stick:
if event.pressed:
try:
delta_y, delta_x = {
'left': (0, -1),
'right': (0, 1),
'up': (-1, 0),
'down': (1, 0),
}[event.direction]
yield delta_y, delta_x
except KeyError:
break
So far, this may look rather strange! What does it mean to call a generator function like “moves” without a for loop? Quite simply: this creates an instance of the generator but doesn’t start evaluating it until it’s used in a loop. In other words nothing in the generator function will run … yet. The same goes for the “game” function which will also be a generator, looping over the movements yielded from “moves” and yielding screens for “display” to deal with.
Speaking of “display”, that should be easy enough to deal with. It’ll be a slightly expanded version of what we used in the previous monitor example with additional cases for zooming and scrolling text:
def display(screen, states):
try:
for anim, data in states:
if anim == 'fade':
screen.fade_to(data)
elif anim == 'zoom':
screen.zoom_to(data)
elif anim == 'show':
screen.array = data
elif anim == 'scroll':
screen.scroll_text(data, background=Color('red'))
else:
assert False
finally:
screen.fade_to(ps.array(Color('black')))
Now onto the game logic itself. Let’s assume that the player always starts at the top left (which will be (1, 1) given that (0, 0) will be an external wall) and must finish at the bottom right. We’ll assume the maze generator handles drawing the maze, including the goal, for us and we just need to handle drawing the player’s position and updating where the player has been.
We’ll handle reacting to motion from the “moves” generator, preventing the player from crossing walls (by checking the position they want to move to doesn’t have the “wall” color), and noticing when they’ve reached the goal (likewise by checking the color of the position they want to move to):
def game(maze, colors, moves):
height, width = maze.shape
y, x = (1, 1)
maze[y, x] = colors['ball']
left, right = clamp(x, width)
top, bottom = clamp(y, height)
yield 'fade', maze[top:bottom, left:right]
for delta_y, delta_x in moves:
if Color(*maze[y + delta_y, x + delta_x]) != colors['wall']:
maze[y, x] = colors['visited']
y += delta_y
x += delta_x
if Color(*maze[y, x]) == colors['goal']:
yield from winners_cup()
break
else:
maze[y, x] = colors['ball']
left, right = clamp(x, width)
top, bottom = clamp(y, height)
yield 'show', maze[top:bottom, left:right]
yield 'fade', ps.array(Color('black'))
In the function above we’ve assumed the existence of two extra functions:
- “clamp” which, given a position (either the user’s current X or Y coordinate) and a limit (the width or height of the maze), returns the lower and upper bounds we should display (on the fixed 8x8 LEDs).
- “winners_cup” which will provide some fancy “You’ve won!” sort of animation.
This is called with yield from, which is equivalent to iterating over it and yielding each result.
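If yield from is new to you, the equivalence is easy to demonstrate with a toy pair of generators (yield from also forwards send() and return values, but we don’t need those here):

```python
def inner():
    yield 'zoom'
    yield 'fade'

def delegating():
    # Equivalent to: for item in inner(): yield item
    yield from inner()

print(list(delegating()))  # ['zoom', 'fade']
```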
Let’s construct “clamp” first as it’s pretty easy:
def clamp(pos, limit, window=8):
low, high = pos - window // 2, pos + window // 2
if low < 0:
high += -low
low = 0
elif high > limit:
low -= high - limit
high = limit
return low, high
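A few quick checks confirm that clamp keeps the 8-pixel window within the maze bounds: positions near an edge pin the window against that edge, while central positions center it (the function is repeated here so the sketch is self-contained, using a hypothetical maze height of 17):

```python
def clamp(pos, limit, window=8):
    low, high = pos - window // 2, pos + window // 2
    if low < 0:
        high += -low
        low = 0
    elif high > limit:
        low -= high - limit
        high = limit
    return low, high

print(clamp(1, 17))   # (0, 8): pinned to the low edge
print(clamp(8, 17))   # (4, 12): centered on the position
print(clamp(16, 17))  # (9, 17): pinned to the high edge
```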
Now let’s code some fancy animation for a user that’s won. We’ll zoom in to a golden cup on a red background, fade to red, and scroll “You win!” across the display:
def winners_cup():
r = Color('red')
y = Color('yellow')
W = Color('white')
yield 'zoom', ps.array([
r, r, W, y, y, y, r, r,
r, r, W, y, y, y, r, r,
r, r, W, y, y, y, r, r,
r, r, r, W, y, r, r, r,
r, r, r, W, y, r, r, r,
r, r, r, W, y, r, r, r,
r, r, r, W, y, r, r, r,
r, r, W, y, y, y, r, r,
])
sleep(2)
yield 'fade', ps.array(r)
yield 'scroll', 'You win!'
Note
Not all generator functions need a loop in them!
Nearly there … now we’ve just got to generate the maze. There are lots of ways of doing this, but one of the simplest is Kruskal’s Algorithm. Roughly speaking, it works like this:
Start off assuming the maze has walls between every cell on every side:
Construct a set of sets (S) each of which represents an individual cell, and the set of walls between them (W). Below we represent a wall as a set giving the cells it divides (there are more efficient representations, but this is easier to visualize). Note that we are only interested in walls dividing cells, not the exterior walls or walls that divide diagonally:
Knock down a random wall where the cells either side of the wall don’t belong to the same set in S, and union together the sets in S containing the cells that have just been joined.
Continue doing this until a single set remains in S, containing all cells. At this point any cell must be reachable from any other cell and the maze is complete; W will contain the set of walls that need to be rendered:
Here’s the implementation, with the actual drawing of the maze split out into its own function:
def generate_maze(width, height, colors):
walls = generate_walls(width, height)
maze = ps.array(shape=(2 * height + 1, 2 * width + 1))
maze[...] = colors['unvisited']
maze[::2, ::2] = colors['wall']
for a, b in walls:
ay, ax = a
by, bx = b
y = 2 * by + 1
x = 2 * bx + 1
if ay == by:
maze[y, x - 1] = colors['wall']
else:
maze[y - 1, x] = colors['wall']
maze[0, :] = maze[:, 0] = colors['wall']
maze[-1, :] = maze[:, -1] = colors['wall']
maze[-2, -2] = colors['goal']
return maze
def generate_walls(width, height):
# Generate the maze with Kruskal's algorithm (there's better
# choices, but this is a simple demo!)
sets = {
frozenset({(y, x)})
for y in range(height)
for x in range(width)
}
walls = set()
for y in range(height):
for x in range(width):
if x > 0:
# Add west wall
walls.add(((y, x - 1), (y, x)))
if y > 0:
# Add north wall
walls.add(((y - 1, x), (y, x)))
for wall in sample(list(walls), k=len(walls)):
# For a random wall, find the sets containing the adjacent cells
a, b = wall
set_a = set_b = None
for s in sets:
if {a, b} <= s:
set_a = set_b = s
elif a in s:
set_a = s
elif b in s:
set_b = s
if set_a is not None and set_b is not None:
break
# If the sets aren't the same, the cells aren't reachable;
# remove the wall between them
if set_a is not set_b:
sets.add(set_a | set_b)
sets.remove(set_a)
sets.remove(set_b)
walls.remove(wall)
if len(sets) == 1:
break
assert len(sets) == 1
assert sets.pop() == {
(y, x)
for y in range(height)
for x in range(width)
}
return walls
At this point we should have a fully functioning maze game that looks quite pretty. You can play it simply by running main(). Once you’ve verified it works, it’s a simple matter to switch out the joystick for the IMU (in exactly the same manner as in Simple Demos). Here’s the updated moves function which queries the IMU instead of the joystick:
def moves(imu):
for reading in imu:
delta_x = int(round(max(-1, min(1, reading.accel.x))))
delta_y = int(round(max(-1, min(1, reading.accel.y))))
if delta_x != 0 or delta_y != 0:
yield delta_y, delta_x
sleep(1/10)
Finally, it would be nice to have the game run in a loop so that after the winners screen it resets with a new maze. It would also be nice to launch the script on boot so we can turn the Pi into a hand-held game. This is also simple to arrange:
- We need to put an infinite loop in main to restart the game when it finishes.
- We need to add a signal handler to shut down the game nicely when systemd tells it to stop (which it does by sending the SIGTERM signal; we can handle this with some simple routines from the built-in signal module).
Here’s the final listing with the updated lines highlighted:
import numpy as np
import pisense as ps
from random import sample
from colorzero import Color
from time import sleep
from signal import signal, SIGTERM
def sigterm(signum, frame):
raise SystemExit(0)
def main():
signal(SIGTERM, sigterm)
width = height = 8
colors = {
'unvisited': Color('black'),
'visited': Color('green'),
'wall': Color('white'),
'ball': Color('red'),
'goal': Color('yellow'),
}
with ps.SenseHAT() as hat:
while True:
maze = generate_maze(width, height, colors)
inputs = moves(hat.imu)
outputs = game(maze, colors, inputs)
display(hat.screen, outputs)
def moves(imu):
for reading in imu:
delta_x = int(round(max(-1, min(1, reading.accel.x))))
delta_y = int(round(max(-1, min(1, reading.accel.y))))
if delta_x != 0 or delta_y != 0:
yield delta_y, delta_x
sleep(1/10)
def display(screen, states):
try:
for anim, data in states:
if anim == 'fade':
screen.fade_to(data)
elif anim == 'zoom':
screen.zoom_to(data)
elif anim == 'show':
screen.array = data
elif anim == 'scroll':
screen.scroll_text(data, background=Color('red'))
else:
assert False
finally:
screen.fade_to(ps.array(Color('black')))
def game(maze, colors, moves):
height, width = maze.shape
y, x = (1, 1)
maze[y, x] = colors['ball']
left, right = clamp(x, width)
top, bottom = clamp(y, height)
yield 'fade', maze[top:bottom, left:right]
for delta_y, delta_x in moves:
if Color(*maze[y + delta_y, x + delta_x]) != colors['wall']:
maze[y, x] = colors['visited']
y += delta_y
x += delta_x
if Color(*maze[y, x]) == colors['goal']:
yield from winners_cup()
break
else:
maze[y, x] = colors['ball']
left, right = clamp(x, width)
top, bottom = clamp(y, height)
yield 'show', maze[top:bottom, left:right]
yield 'fade', ps.array(Color('black'))
def generate_maze(width, height, colors):
walls = generate_walls(width, height)
maze = ps.array(shape=(2 * height + 1, 2 * width + 1))
maze[...] = colors['unvisited']
maze[::2, ::2] = colors['wall']
for a, b in walls:
ay, ax = a
by, bx = b
y = 2 * by + 1
x = 2 * bx + 1
if ay == by:
maze[y, x - 1] = colors['wall']
else:
maze[y - 1, x] = colors['wall']
maze[0, :] = maze[:, 0] = colors['wall']
maze[-1, :] = maze[:, -1] = colors['wall']
maze[-2, -2] = colors['goal']
return maze
def generate_walls(width, height):
    # Generate the maze with Kruskal's algorithm (there are better
    # choices, but this is a simple demo!)
sets = {
frozenset({(y, x)})
for y in range(height)
for x in range(width)
}
walls = set()
for y in range(height):
for x in range(width):
if x > 0:
# Add west wall
walls.add(((y, x - 1), (y, x)))
if y > 0:
# Add north wall
walls.add(((y - 1, x), (y, x)))
for wall in sample(list(walls), k=len(walls)):
# For a random wall, find the sets containing the adjacent cells
a, b = wall
set_a = set_b = None
for s in sets:
if {a, b} <= s:
set_a = set_b = s
elif a in s:
set_a = s
elif b in s:
set_b = s
if set_a is not None and set_b is not None:
break
# If the sets aren't the same, the cells aren't reachable;
# remove the wall between them
if set_a is not set_b:
sets.add(set_a | set_b)
sets.remove(set_a)
sets.remove(set_b)
walls.remove(wall)
if len(sets) == 1:
break
assert len(sets) == 1
assert sets.pop() == {
(y, x)
for y in range(height)
for x in range(width)
}
return walls
def clamp(pos, limit, window=8):
low, high = pos - window // 2, pos + window // 2
if low < 0:
high += -low
low = 0
elif high > limit:
low -= high - limit
high = limit
return low, high
def winners_cup():
r = Color('red')
y = Color('yellow')
W = Color('white')
yield 'zoom', ps.array([
r, r, W, y, y, y, r, r,
r, r, W, y, y, y, r, r,
r, r, W, y, y, y, r, r,
r, r, r, W, y, r, r, r,
r, r, r, W, y, r, r, r,
r, r, r, W, y, r, r, r,
r, r, r, W, y, r, r, r,
r, r, W, y, y, y, r, r,
])
sleep(2)
yield 'fade', ps.array(r)
yield 'scroll', 'You win!'
if __name__ == '__main__':
main()
Now to launch the game on boot, we’ll create a systemd service to execute it under the unprivileged “pi” user. Copy the following into /etc/systemd/system/maze.service:
[Unit]
Description=The Sense HAT Maze IMU game
After=local-fs.target
[Service]
ExecStart=/usr/bin/python3 /home/pi/maze_final.py
User=pi
[Install]
WantedBy=multi-user.target
Note
You’ll need to modify the path for ExecStart to point to the location of your maze_final.py script.
Finally, run the following command to enable the service on boot:
$ sudo systemctl enable maze
If you ever wish to stop the script running on boot:
$ sudo systemctl disable maze
Frequently Asked Questions (FAQ)¶
Feel free to ask the author, or add questions to the issue tracker on GitHub, or even edit this document yourself and add frequently asked questions you’ve seen on other forums!
Why?¶
To be rather blunt, I’m not a fan of the Sense HAT’s official API. This probably sounds a bit strange coming from someone who played a small part in making it (I wrote the joystick handling side of it, and later the desktop Sense HAT emulator)! Originally pisense was my attempt, back when the Sense HAT was relatively new, to design an API the way I wanted. It was a rough experiment and I didn’t want to “pollute” the space by offering a competing API to the official one, so I left it as just that: an experiment available from my GitHub pages, but not properly documented, tested, or packaged.
Over the years, I’ve wanted to actually use the Sense HAT in a few applications and each time I’ve tried, I’ve found myself frustrated by the inconsistencies or shortcomings in the official API. Eventually that came to a head and I decided to pull pisense out of storage and polish it up for serious use (I considered including it statically in applications I built, but that seemed ugly).
To be clear: this is not an attempt to supplant the official API. If you’re a teacher, you’re almost certainly better off with the official API. All the learning resources are built for it, the community support is there for it, and it’s the only API accepted for the fabulous Astro Pi mission. Stop reading this and go learn that one.
You still haven’t answered why…¶
All the teachers gone? Okay. I don’t want to put you off using the official API, but here’s what I don’t like about it:
- It pulls in numpy as a dependency. So does pisense, but we actually use it for more than rotating the display (seriously, that’s all the official API uses it for). Why pull in numpy (a huge dependency) and then not use its signature class (an n-dimensional array) for your two-dimensional display?
- It pulls in PIL as a dependency. Again, so does pisense, but we use it for a little more than a single method which just loads images for display. How about presenting the display as a PIL image for manipulation? Or using the drawing and scaling capabilities for animation? Font support for text display? Oh, and our image conversions don’t rely on nested lists …
- Fixed width fonts for scrolling text? Urgh.
- The stick interface (yes, the one I wrote …) isn’t bad, but it’s not great. The real stroke of genius in pisense (which sadly I can’t take credit for: yet again, it was one of Ben Nuttall’s fabulous notions) was separating held into its own value in the StickEvent tuple, so that release events can tell if the button was previously held.
- Everything is conflated into a single class (except the joystick) so if you don’t want certain functionality: tough, you still have to deal with all the initialization and memory usage for it (okay, that’s just a nitpick really).
- Tons of duplicated ways of doing things. I want the temperature; do I call the get_temperature() method, or the get_temperature_from_humidity() method, or query the temperature property, or the temp property? Actually it doesn’t matter; they all do the same thing (call get_temperature_from_humidity()).
- Several limitations in the API. I want both the raw accelerometer readings (in g, because degrees really are useless for that) and the magnetometer readings. The only way to do this is to query accel_raw and compass_raw (or call their duplicated methods). However, under the covers this causes two separate IMU reads, with all the attendant overhead and inconsistency that implies. There’s no way to get this set of data from a single IMU read.
I’m not intending this to be the simplest interface to the Sense HAT. The official API is probably easier to get going with. My feeling is that I’d prefer an API that was a little harder to get started with if it allowed me more scope to “get things done”.
Why are you using single precision floats in the display?!¶
Under the covers, the Sense HAT’s display framebuffer stores pixel information in RGB565 format: 5 bits for red and blue, and 6 bits for green. The 32-bit single-precision floating point format used in pisense has 23 bits of mantissa; more than enough to represent the 5 or 6 bits of data for each pixel.
Why not use RGB565 directly? We do: the SenseScreen.raw attribute provides an array backed by the actual framebuffer in RGB565 format, if you really want the fastest, lowest level access.
However, for ease of use I wanted the array format to be compatible with my colorzero library, which meant using a floating point format. The smaller the format, the more efficient the library, as there’s less data to chuck around and crunch (ideally I wanted it to perform reasonably on the smallest Pi platforms, like the old A+). During development, this library used the rather obscure half-precision floating point format, which is only 16 bits in size (and provides 10 bits for the mantissa). However, hardware support for this format is only present on some Pi models and, as best as I can tell, isn’t supported at all in Raspbian’s 32-bit userland. In tests, the single-precision format turned out to be the fastest, so that’s what the library uses.
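To see why so few bits per channel suffice, here is a hedged sketch of RGB565 packing and unpacking; the helper names are illustrative, not part of the pisense API (which handles this conversion internally):

```python
def rgb888_to_rgb565(r, g, b):
    """Pack three 0.0-1.0 floats into a 16-bit RGB565 value."""
    # Red and blue get 5 bits (0..31), green gets 6 (0..63)
    return (
        (int(round(r * 31)) << 11) |
        (int(round(g * 63)) << 5) |
        int(round(b * 31))
    )

def rgb565_to_rgb888(value):
    """Unpack a 16-bit RGB565 value back into three 0.0-1.0 floats."""
    return (
        ((value >> 11) & 0x1f) / 31,
        ((value >> 5) & 0x3f) / 63,
        (value & 0x1f) / 31,
    )
```

A round-trip through these functions loses at most a couple of percent of precision per channel, which is comfortably within what a 23-bit mantissa can represent.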
Why are orientation and gyroscopic values in radians, not degrees?¶
Firstly, Python’s standard library includes routines for conversion, so this is trivial to achieve without the library duplicating them. However, the more important reason is to avoid cluttering the API with unnecessary attributes.
Degrees are probably simpler to look at as pure values, but they’re considerably less useful in practice. This is because almost every routine you are likely to use these values with (all trigonometric routines, for instance) only accepts radians. This is why the repr() of the orientation includes degree values (because they’re useful values to “eyeball”) but the actual class doesn’t include such values.
If it did, I’d likely name them things like roll_degrees, at which point you’re typing almost as much as degrees(roll) anyway!
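The conversion in question is a one-liner with the standard library; a minimal sketch (the roll value here is made up for illustration):

```python
import math

# pisense reports orientation in radians; converting for human
# consumption is trivial with the standard library...
roll = math.pi / 4
roll_deg = math.degrees(roll)   # 45 degrees

# ...while the radian values can be fed to trigonometric routines
# directly, with no conversion at all
tilt = math.sin(roll)
```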
Can I use this with the Sense HAT emulator?¶
Yes; see the Sense HAT Emulator section.
Sense HAT Emulator¶
The pisense library is compatible with the desktop Sense HAT emulator; however, it uses a slightly different method of specifying that the emulator should be used instead of the “real” HAT. You can construct the SenseHAT class passing True as the value of the emulate parameter:
from pisense import SenseHAT
hat = SenseHAT(emulate=True)
However, the default value of emulate is taken from an environment variable, PISENSE_EMULATE. This means an even easier way (which doesn’t require modifying your script at all) is to simply run your script after setting that variable. For example:
$ python my_script.py # run on the "real" HAT
$ PISENSE_EMULATE=1 python my_script.py # run on the emulator
If you are going to be working with the emulator primarily (e.g. if you’re not working on a Pi), you may wish to add the following line to your ~/.bashrc script so that all scripts default to using the emulator:
export PISENSE_EMULATE=1
If the emulator is not detected when SenseHAT is constructed, and emulate is either True or defaults to True because of the environment variable, the emulator will be launched.
Note
The emulator referred to here is the desktop Sense HAT emulator, not the excellent online emulator developed by Trinket. Unfortunately, as pisense relies on both numpy and PIL, it’s unlikely that it can easily be ported to that platform.
Development¶
The main GitHub repository for the project can be found at https://github.com/waveform80/pisense
Anyone is more than welcome to open tickets to discuss bugs, new features, or just to ask usage questions (I find this useful for gauging what questions ought to feature in the FAQ, for example).
Even if you don’t feel up to hacking on the code, I’d love to hear suggestions from people of what you’d like the API to look like!
Development installation¶
If you wish to develop pisense itself, it is easiest to obtain the source by cloning the GitHub repository, and then use the “develop” target of the Makefile. This installs the package as a link to the cloned repository, allowing in-place development (it also builds a tags file for use with vim/emacs using Exuberant’s ctags utility, and links the Sense HAT’s customized RTIMULib into your virtual environment if it can find it). The following example demonstrates this method within a virtual Python environment:
$ sudo apt install lsb-release build-essential git git-core \
exuberant-ctags virtualenvwrapper python-virtualenv python3-virtualenv
$ cd
$ mkvirtualenv -p /usr/bin/python3 pisense
$ workon pisense
(pisense) $ git clone https://github.com/waveform80/pisense.git
(pisense) $ cd pisense
(pisense) $ make develop
To pull the latest changes from git into your clone and update your installation:
$ workon pisense
(pisense) $ cd ~/pisense
(pisense) $ git pull
(pisense) $ make develop
To remove your installation, destroy the sandbox and the clone:
(pisense) $ deactivate
$ rmvirtualenv pisense
$ rm -fr ~/pisense
Building the docs¶
If you wish to build the docs, you’ll need a few more dependencies. Inkscape is used for conversion of SVGs to other formats, Graphviz is used for rendering certain charts, and TeX Live is required for building PDF output. The following command should install all required dependencies:
$ sudo apt install texlive-latex-recommended texlive-latex-extra \
texlive-fonts-recommended graphviz inkscape
Once these are installed, you can use the “doc” target to build the documentation:
$ workon pisense
(pisense) $ cd ~/pisense
(pisense) $ make doc
The HTML output is written to build/html while the PDF output goes to build/latex.
Test suite¶
If you wish to run the pisense test suite, follow the instructions in Development installation above and then make the “test” target within the sandbox:
$ workon pisense
(pisense) $ cd ~/pisense
(pisense) $ make test
API - The Sense HAT¶
The pisense module is the main namespace for the pisense package; it imports (and exposes) all publicly accessible classes, functions, and constants from all the modules beneath it for convenience. It also defines the top-level SenseHAT class.
SenseHAT¶
Warnings¶
API - Screen¶
The screen interface is by far the most extensive and complex part of the pisense library, comprising several classes and numerous functions to handle representing the screen in a variety of conveniently manipulated formats, and to generate slick animations. The two most important elements are the main SenseScreen class itself, and the ScreenArray class which is used to represent the contents of the display.
SenseScreen¶
Animation functions¶
The following animation generator functions are used internally by the animation methods of SenseScreen. They are also provided as separate generator functions to permit users to build up complex sequences of animations, or to aid in generating other effects, like interspersing frames with other sequences.
Each function is a generator function which yields an Image for each frame of the animation.
Easing functions¶
The easing functions are used with the animation functions above for their easing parameters.
An easing function must take a single integer parameter indicating the number of frames in the resulting animation. It must return a sequence of (or generator which yields) floating point values between 0.0 (which indicates the start of the animation) and 1.0 (which indicates the end of the animation). How fast the value moves from 0.0 to 1.0 dictates how fast the animation progresses from frame to frame.
Several typical easing functions are provided by the library, but you are free to use any function which complies with this interface. The default easing function is always linear.
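As a concrete illustration of the interface described above, here is a minimal sketch of easing functions (the names are illustrative; the library ships its own implementations, so treat these as stand-ins):

```python
def linear(frames):
    """Ease at a constant rate (the default easing behaviour)."""
    # Yields `frames` values evenly spaced from 0.0 to 1.0 inclusive
    # (assumes frames >= 2)
    for f in range(frames):
        yield f / (frames - 1)

def ease_in(frames):
    """Start slowly and accelerate, via a quadratic curve."""
    for t in linear(frames):
        yield t ** 2

def ease_out(frames):
    """Start quickly and decelerate toward the end."""
    for t in linear(frames):
        yield 1 - (1 - t) ** 2
```

Any such generator (or sequence-returning function) can be passed wherever an animation method takes an easing parameter, since all that matters is the 0.0-to-1.0 contract.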
Gamma tables¶
Two built-in gamma tables are provided which can be assigned to SenseScreen.gamma. However, you are free to use any compatible list of 32 values.
- pisense.DEFAULT_GAMMA¶ The default gamma table, which can be assigned directly to gamma. The default rises in a steady curve from 0 (off) to 31 (full brightness).
- pisense.LOW_GAMMA¶ The “low light” gamma table, which can be assigned directly to gamma. The low light table rises in a steady curve from 0 (off) to 10.
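A custom table can be built from any curve, provided it contains exactly 32 integer values in the 0 to 31 range; a hedged sketch (the exponent here is purely illustrative):

```python
# A hypothetical custom gamma table: the index is the linear
# brightness level (0..31) and the value is the LED brightness to
# use (also 0..31). A power curve like this dims the mid-range
# relative to the default table
custom_gamma = [round(31 * (i / 31) ** 2.2) for i in range(32)]

# With a real HAT this could then be assigned to the screen, e.g.:
# hat.screen.gamma = custom_gamma
```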
API - Screen Arrays¶
This chapter covers the ScreenArray class: how it should be constructed, how it can be used to manipulate the Sense HAT’s display, and how to convert it to various different formats.
ScreenArray Class¶
- class pisense.ScreenArray(shape=(8, 8))¶
The ScreenArray class is a descendant of ndarray with customizations to make working with the Sense HAT screen a little easier.
In most respects, a ScreenArray will act like any other numpy array. Exceptions to the normal behaviour are documented in the following sections.
Instances of this class should not be created directly. Rather, obtain the current state of the screen from the array attribute of SenseHAT.screen, or use the array() function to create a new instance from a variety of sources (a PIL Image, another array, a list of Color instances, etc.).
Display Association¶
If the ScreenArray instance was obtained from the array attribute of SenseHAT.screen, it will be “associated” with the display. Manipulating the content of the array will manipulate the appearance of the display on the Sense HAT:
>>> from pisense import *
>>> hat = SenseHAT()
>>> arr = hat.screen.array
>>> arr[0, 0] = (1, 0, 0) # set the top-left pixel to red
Copying an array that is associated with a display (via the copy() method) breaks the association. This is a convenient way to take a copy of the current display, fiddle around with it without intermediate states displaying, and then update the display by copying it back:
>>> from pisense import *
>>> hat = SenseHAT()
>>> arr = hat.screen.array.copy()
>>> arr[4:, :] = (1, 0, 1) # HAT's pixels are *not* changed (yet)
>>> hat.screen.array = arr # HAT's bottom pixels are changed to purple
Operations in numpy that create a new array will also break the display association (e.g. adding two arrays together to create a new array; the new array will not “derive” its display association from the original arrays). However, operations that don’t create a new array (e.g. slicing, flipping, etc.) will normally maintain the association. This is why you can update portions of the display using slices.
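This behaviour mirrors numpy’s usual view-versus-copy semantics, which can be observed with plain arrays; in this sketch, np.shares_memory() stands in for the display association (a shared buffer is what keeps writes reaching the display):

```python
import numpy as np

# A stand-in for a screen array: 8x8 pixels of three float32 channels
arr = np.zeros((8, 8, 3), dtype=np.float32)

# Slicing produces a *view* over the same buffer; writes through the
# slice reach the original array (and hence would reach the display)
bottom = arr[4:, :]
assert np.shares_memory(arr, bottom)

# Arithmetic produces a *new* array with its own buffer, so the
# result is no longer tied to the original
brighter = arr + 0.5
assert not np.shares_memory(arr, brighter)
```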
Data Type¶
The data-type of the array is fixed and cannot be altered. Specifically the data-type is a triple of single-precision floating point values between 0.0 and 1.0, labelled “r”, “g” and “b”. In other words, each element of the array is a triple RGB value representing the color of a single pixel.
The 0.0 to 1.0 range of color values is not enforced. Hence if you add two screen arrays together you may wind up with values greater than 1.0 or less than 0.0 in one or more color planes. This is deliberate as intermediate values exceeding this range can be useful in some calculations.
Hint
The numpy clip() method is a convenient way of limiting values to the 0.0 to 1.0 range before updating the display.
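For instance, after arithmetic that overshoots the range; a sketch using plain numpy arrays as stand-ins for screen arrays:

```python
import numpy as np

# Two hypothetical screen-sized arrays whose sum overshoots 1.0
glow = np.full((8, 8, 3), 0.75, dtype=np.float32)
tint = np.full((8, 8, 3), 0.50, dtype=np.float32)

blended = glow + tint            # every value is now 1.25
safe = blended.clip(0.0, 1.0)    # limited back to the valid range

# safe is now suitable for assignment to the display, e.g.:
# hat.screen.array = safe
```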
Previews¶
While you can see the state of the HAT’s array visually, what about arrays that you create separately with the array() function? For this, the show() method is provided:
>>> from pisense import *
>>> arr = array(draw_text('Hello!'))
>>> arr.show()
██ ██ ████ ████ ██
██ ██ ██ ██ ██
██ ██ ██████ ██ ██ ██████ ██
██████████ ██ ██ ██ ██ ██ ██ ██
██ ██ ██████████ ██ ██ ██ ██ ██
██ ██ ██ ██ ██ ██ ██
██ ██ ██████ ██████ ██████ ██████ ██
>>> arr.show('##', width=16, overflow='$')
$
## ## $
## ## $
## ## $
########## ##$
## ## ##$
## ## ##$
## ## $
>>> arr[:8, :8].show()
██ ██
██ ██
██ ██ ██
██████████ ██
██ ██ ████
██ ██ ██
██ ██ ██
Note that the method is not limited to the size of the Sense HAT’s screen, which makes it useful for previewing constructions that you intend to slice for display later.
- ScreenArray.show(element='\u2588\u2588', colors=None, width=None, overflow='\u00BB')¶
Print a preview of the screen to the console.
The element parameter specifies the string used to represent each element of the display. This defaults to “██” (two Unicode full block drawing characters) which is usually sufficient to provide a fairly accurate representation of the screen.
The colors parameter indicates the sort of ANSI coding (if any) that should be used to depict the colors of the display. The following values are accepted:
- 16m: Use true-color ANSI codes capable of representing ≈16 million colors. This is the default if stdout is a TTY. The default terminal in Raspbian supports this style of ANSI code.
- 256: Use 256-color ANSI codes. Most modern terminals (including Raspbian’s default terminal) support this style of ANSI code.
- 8: Use old-style DOS ANSI codes, only capable of representing 8 colors. There is rarely a need to resort to this setting.
- 0: Don’t use ANSI color codes. Instead, any pixel values with a brightness >25% (an arbitrary cut-off) will be displayed, while darker pixels will be rendered as spaces. This is the default if stdout is not a TTY.
The width parameter specifies the maximum width for the output. This defaults to None, which means the method will attempt to use the terminal’s width (if this can be determined; if it cannot, then 80 will be used as a fallback). Pixels beyond the specified width will be excluded from the output, and a column of overflow strings will be shown to indicate that horizontal truncation has occurred.
Format Strings¶
Screen arrays can also be used in format strings to return the string that the show() method would print. The format string specification for screen arrays consists of colon-separated sections (in any order):
- A section prefixed with “e” specifies the string used to represent an individual element of the display. This defaults to ██ (two filled Unicode block characters, which usually represent the display fairly accurately), and is equivalent to the element parameter of show().
. - A section prefixed with “o” specifies the string used to represent horizontal overflow (equivalent to the overflow parameter). When the string will be longer than the specified width (or the terminal width if none is given), it will be truncated and the overflow string displayed at the right.
- A section prefixed with “w” specifies the maximum width that the rendered array can take up in character widths (equivalent to the width parameter). Note that ANSI color codes (which render with zero width) will not count towards this limit, so each line returned may be longer than the specified width but shouldn’t render longer than this. The default is the width of the terminal, if it can be detected, or 80 columns otherwise.
- A section prefixed with “c” specifies the style of ANSI color codes to use in the output (equivalent to the colors parameter). If unspecified, full true-color ANSI codes will be used if the terminal is detected to be a TTY. Otherwise, no ANSI codes will be used and elements will only be rendered if their lightness exceeds 1/4 (an arbitrary cut-off which seems to work tolerably well in practice). See the show() method for more information on valid values for this parameter.
Some examples of operation:
>>> from pisense import *
>>> arr = array(draw_text('Hello!'))
>>> print('{}'.format(arr))
██ ██ ████ ████ ██
██ ██ ██ ██ ██
██ ██ ██████ ██ ██ ██████ ██
██████████ ██ ██ ██ ██ ██ ██ ██
██ ██ ██████████ ██ ██ ██ ██ ██
██ ██ ██ ██ ██ ██ ██
██ ██ ██████ ██████ ██████ ██████ ██
>>> print('{:e#:c0}'.format(arr))
# # ## ## #
# # # # #
# # ### # # ### #
##### # # # # # # #
# # ##### # # # # #
# # # # # # #
# # ### ### ### ### #
>>> print('{:e#:o$:w16}'.format(arr))
$
# # ## $
# # # $
# # ### # $
##### # # # $
# # ##### # $
# # # # $
# # ### ###$
>>> print('{:e##:o$:w16}'.format(arr))
$
## ## $
## ## $
## ## $
########## ##$
## ## ##$
## ## ##$
## ## $
Note
The last example demonstrates that elements will never be chopped in half by the truncation; either a display element is included in its entirety or not at all.
A more formal description of the format string specification for ScreenArray would be as follows:
<format_spec> ::= <format_part> (":" <format_part>)*
<format_part> ::= (<elements> | <overflow> | <colors> | <width>)
<elements> ::= "e" <any characters except : or {}>+
<overflow> ::= "o" <any characters except : or {}>+
<colors> ::= "c" ("0" | "8" | "256" | "16m")
<width> ::= "w" <digit>+
<digit> ::= "0"..."9"
Format conversions¶
The following conversion functions are provided to facilitate converting various inputs into something either easy to manipulate or easy to display on the screen.
Advanced conversions¶
The following conversion functions are used internally by pisense, and are generally not required unless you want to work with SenseScreen.raw directly, or you know exactly what formats you are converting between and want to skip the overhead of the buf_to_* routines figuring out the input type.
API - Joystick¶
The joystick on the Sense HAT is an excellent tool for providing a user interface on Pis without an attached keyboard. The SenseStick class provides several different paradigms for programming such an interface:
- At its simplest, you can poll the state of the joystick with various attributes like SenseStick.up.
- You can use event-driven programming by assigning handlers to attributes like SenseStick.when_up.
- You can also treat the joystick like an iterable and write transformations that convert events into other useful outputs.
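The third paradigm can be sketched with a pure transformation function; the Event tuple below is a simplified stand-in for pisense’s StickEvent (the real tuple carries more fields, such as a timestamp), so with real hardware you would iterate over hat.stick instead:

```python
from collections import namedtuple

# A simplified stand-in for pisense's StickEvent tuple
Event = namedtuple('Event', ('direction', 'pressed', 'held'))

def movements(events):
    """Transform a stream of joystick events into (dy, dx) deltas."""
    deltas = {
        'up': (-1, 0), 'down': (1, 0),
        'left': (0, -1), 'right': (0, 1),
    }
    for event in events:
        # Only presses of the four directions produce a movement;
        # releases and the enter button are ignored
        if event.pressed and event.direction in deltas:
            yield deltas[event.direction]

fake_events = [
    Event('up', True, False), Event('enter', True, False),
    Event('left', True, False), Event('left', False, True),
]
print(list(movements(fake_events)))  # [(-1, 0), (0, -1)]
```

This is exactly the shape of the moves() generator used by the maze game earlier, just driven by synthetic events instead of the HAT.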
SenseStick¶
StickEvent¶
Warnings¶
API - Environment Sensors¶
The Sense HAT has two environment sensors: a humidity sensor and a pressure sensor, which are exposed in the combined SenseEnviron class. This provides readings as EnvironReadings tuples.
SenseEnviron¶
EnvironReadings¶
Temperature Configuration¶
API - Inertial Measurement Unit (IMU)¶
The Inertial Measurement Unit (IMU) on the Sense HAT has myriad uses in all sorts of projects, from High Altitude Balloon (HAB) flights and robotics, to detecting magnetic fields and making novel user interfaces. It is represented in pisense by the SenseIMU class, which provides readings as IMUState, IMUVector and IMUOrient tuples.
SenseIMU¶
IMUState¶
IMUVector¶
IMUOrient¶
SenseSettings¶
Change log¶
Release 0.1 (2018-07-19)¶
Initial release. Please note that as this is a pre-v1 release, API backwards compatibility is not yet guaranteed. I’m mostly happy with the API, but for some subtle aspects of the ScreenArray class. Hence if anything’s going to change it’s probably going to be there. Feedback welcome!
License¶
Copyright 2015-2018 Dave Jones
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
- Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
- Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.