After much deliberation, I decided to migrate to a new platform, hosted for free on GitHub. It involves a great deal of hacking, but in spite of all the extra work, the move is totally worthwhile.

Please redirect your browser to blancosilva.github.io for my new professional blog.

To illustrate a few advantages of the `scipy` stack in one of my upcoming talks, I have posted an `ipython` notebook with (a reduced version of) the current draft of Chapter 6 (Computational Geometry) of my upcoming book, **Mastering SciPy**.

The raw `ipynb` can be downloaded from my github repository [blancosilva/Mastering-Scipy/], or viewed directly with the nbviewer at [this other link].

I also made a selection of some fun examples for the talk. You can download the presentation by clicking on the image above.

Enjoy!

On Tuesday, September 8th, 1857, the steamboat SS Central America left Havana at 9 AM for New York, carrying about 600 passengers and crew members. Stowed inside this vessel was a very precious cargo: a set of manuscripts by John James Audubon, and three tons of gold bars and coins. The manuscripts documented an expedition through the then-uncharted southwestern United States and California, and contained 200 sketches and paintings of its wildlife. The gold, the fruit of many years of prospecting and mining during the California Gold Rush, was meant to start anew the lives of many of the passengers aboard.

On the 9th, the vessel ran into a storm which developed into a hurricane. The steamboat endured four hard days at sea, and by Saturday morning the ship was doomed. The captain arranged to have women and children taken off to the brig Marine, which offered them assistance at about noon. In spite of the efforts of the remaining crew and passengers to save the ship, the inevitable happened at about 8 PM that same day. The wreck claimed the lives of 425 men, and carried the valuable cargo to the bottom of the sea.

It was not until the late 1980s that technology allowed the recovery of shipwrecks in the deep sea. But no technology would be of any help without an accurate location of the site. In the following paragraphs we would like to illustrate the power of the `scipy` stack by performing a simple simulation that ultimately creates a dataset of possible locations for the wreck of the SS Central America, and then mines the data to attempt to pinpoint the most probable target.

We simulate several possible paths of the steamboat (say 10,000 randomly generated possibilities) between 7:00 AM on that Saturday and, thirteen hours later, 8:00 PM. At 7:00 AM the ship’s captain, William Herndon, took a celestial fix and verbally relayed the position to the schooner *El Dorado*. The fix was 31º25′ North, 77º10′ West. Because the ship was not operative at that point—no engine, no sails—for the next thirteen hours its course was subject solely to the effect of ocean currents and winds. With enough information, it is possible to model the drift and leeway along different possible paths.

We start by creating a **data frame**—a computational structure that holds all the values we need in a very efficient way. We do so with the help of the `pandas` library.

```python
from datetime import datetime, timedelta
from dateutil.parser import parse
import numpy as np, pandas as pd

interval = [parse("9/12/1857 7 am")]
for k in range(14*2-1):
    if k % 2 == 0:
        interval.append(interval[-1])
    else:
        interval.append(interval[-1] + timedelta(hours=1))

herndon = pd.DataFrame(np.zeros((28, 10000)),
                       index = [interval, ['Lat', 'Lon']*14])
```

Each column of the data frame `herndon` is to hold the latitude and longitude of a possible path of the SS Central America, sampled every hour. Let us populate this data following a similar analysis to the one followed by the Columbus America Discovery Group, as explained by Lawrence D. Stone in the article *Revisiting the SS Central America Search*, from the 2010 Conference on Information Fusion.

The celestial fix obtained by Capt. Herndon at 7:00 AM was taken with a sextant in the middle of a storm. There are some uncertainties in the estimation of latitude and longitude with this method under those weather conditions, which we model by a bivariate normally distributed random variable with mean (0,0) and standard deviations of 0.9 nautical miles (for latitude) and 3.9 nautical miles (for longitude). We first create a random variable with those characteristics, and then use it to populate the data frame with random initial locations.

```python
from scipy.stats import multivariate_normal

celestial_fix = multivariate_normal(cov = np.diag((0.9, 3.9)))
```

To estimate the corresponding celestial fixes, as well as for all further geodetic computations, we will use the accurate formulas of Vincenty for ellipsoids, assuming a radius at the Equator of a = 6,378,137 meters, and a flattening of the ellipsoid of f = 1/298.257223563 (these figures are regarded as one of the standards for use in cartography, geodesy and navigation, and are referred to by the community as the World Geodetic System WGS-84 ellipsoid).

A very good set of Vincenty’s formulas coded in `python` can be found at wegener.mechanik.tu-darmstadt.de/GMT-Help/Archiv/att-8710/Geodetic_py.

In particular, for this example we will be using *Vincenty’s direct formula*, which computes the resulting latitude φ₂, longitude λ₂, and azimuth α₂ of an object starting at latitude φ₁, longitude λ₁, and traveling s meters with initial azimuth α₁. Latitudes, longitudes and azimuths are given in degrees, and distances in meters. We also follow the convention of assigning negative values to longitudes to the West. To apply the conversion from nautical miles or knots to their respective units in SI, we employ the system of units in `scipy.constants`.

```python
from Geodetic_py import vinc_pt
from scipy.constants import nautical_mile

a = 6378137.0
f = 1./298.257223563

for k in range(10000):
    lat_delta, lon_delta = celestial_fix.rvs() * nautical_mile
    azimuth = 90 - np.angle(lat_delta + 1j*lon_delta, deg=True)
    distance = np.hypot(lat_delta, lon_delta)
    output = vinc_pt(f, a, 31+25./60, -77-10./60, azimuth, distance)
    herndon.ix['1857-09-12 07:00:00',:][k] = output[0:2]
```

Issuing now the command `herndon.ix['1857-09-12 07:00:00',:]` gives us the following output:

```
             0          1          2          3          4          5  \
Lat  31.455345  31.452572  31.439491  31.444000  31.462029  31.406287
Lon -77.148860 -77.168941 -77.173416 -77.163484 -77.169911 -77.168462

             6          7          8          9    ...         9990  \
Lat  31.390807  31.420929  31.441248  31.367623   ...    31.405862
Lon -77.178367 -77.187680 -77.176924 -77.172941   ...   -77.146794

          9991       9992       9993       9994       9995       9996  \
Lat  31.394365  31.428827  31.415392  31.443225  31.350158  31.392087
Lon -77.179720 -77.182885 -77.159965 -77.186102 -77.183292 -77.168586

          9997       9998       9999
Lat  31.443154  31.438852  31.401723
Lon -77.169504 -77.151137 -77.134298

[2 rows x 10000 columns]
```

We simulate the drift according to the formula `drift = current + leeway * wind`. In this formula, the ocean current is modeled as a vector pointing roughly north-east (around 45 degrees of azimuth) with a variable speed between 1 and 1.5 knots. The other random variable, the wind, represents the action of the winds in the area during the hurricane, which we choose to represent by directions ranging between south and east, and speeds with mean 0.2 knots and standard deviation 1/30 knots. Both random variables are coded as bivariate normals. Finally, we have accounted for the leeway factor: according to a study performed on the blueprints of the SS Central America, we have estimated this leeway to be about 3%.

This choice of random variables to represent the ocean current and wind differs from the ones used in the aforementioned paper. In our version we have not used the actual covariance matrices as computed by Stone from data received from the Naval Oceanographic Data Center. Rather, we have presented a very simplified version.

```python
current = multivariate_normal((np.pi/4, 1.25), cov=np.diag((np.pi/270, .25/3)))
wind = multivariate_normal((np.pi/4, .3), cov=np.diag((np.pi/12, 1./30)))
leeway = 3./100

for date in pd.date_range('1857-9-12 08:00:00', periods=13, freq='1h'):
    before = herndon.ix[date-timedelta(hours=1)]
    for k in range(10000):
        angle, speed = current.rvs()
        current_v = speed * nautical_mile * (np.cos(angle) + 1j * np.sin(angle))
        angle, speed = wind.rvs()
        wind_v = speed * nautical_mile * (np.cos(angle) + 1j*np.sin(angle))
        drift = current_v + leeway * wind_v
        azimuth = 90 - np.angle(drift, deg=True)
        distance = abs(drift)
        output = vinc_pt(f, a, before.ix['Lat'][k], before.ix['Lon'][k],
                         azimuth, distance)
        herndon.ix[date,:][k] = output[:2]
```

Let us plot the first three of those simulated paths:

```python
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap

m = Basemap(llcrnrlon=-77.4, llcrnrlat=31.2, urcrnrlon=-76.6, urcrnrlat=31.8,
            projection='lcc', lat_0=31.5, lon_0=-77,
            resolution='l', area_thresh=1000.)
m.drawmeridians(np.arange(-77.4,-76.6,0.1), labels=[0,0,1,1])
m.drawparallels(np.arange(31.2,32.8,0.1), labels=[1,1,0,0])

colors = ['r', 'b', 'k']
styles = ['-', '--', ':']
for k in range(3):
    longitudes = herndon[k][:,'Lon'].values
    latitudes = herndon[k][:,'Lat'].values
    longitudes, latitudes = m(longitudes, latitudes)
    m.plot(longitudes, latitudes, color=colors[k], lw=3, ls=styles[k])

plt.show()
```

As expected, they follow a generally north-easterly direction, on occasion showing deviations caused by the effect of the strong winds.

The advantage of storing all the different steps of these paths becomes apparent if we need to perform some further study—and maybe filtering—on the data obtained. We could impose additional conditions on the paths, for example, and use only those that satisfy the extra rules. Another advantage is the possibility of performing different analyses on the paths with very little coding. By issuing the command `herndon.loc(axis=0)[:,'Lat'].describe()`, we obtain quick statistics on the computed latitudes of all 10,000 paths (number of items, mean, standard deviation, min, max, and the quartiles of the data).

```
               0          1          2          3          4          5  \
count  14.000000  14.000000  14.000000  14.000000  14.000000  14.000000
mean   31.474706  31.489831  31.479797  31.551953  31.543533  31.516511
std     0.058218   0.060026   0.060504   0.060204   0.060290   0.065008
min    31.388410  31.400693  31.387974  31.457087  31.446786  31.421331
25%    31.431773  31.439437  31.429712  31.506764  31.498132  31.465424
50%    31.470768  31.491100  31.481918  31.551521  31.546649  31.510613
75%    31.515063  31.541353  31.527317  31.599800  31.588251  31.568368
max    31.571535  31.580176  31.575999  31.641745  31.639944  31.619281

               6          7          8          9    ...         9990  \
count  14.000000  14.000000  14.000000  14.000000   ...    14.000000
mean   31.463063  31.515181  31.510682  31.448281   ...    31.541765
std     0.064530   0.060904   0.064621   0.061350   ...     0.059633
min    31.369973  31.410879  31.412623  31.353806   ...    31.445757
25%    31.410827  31.473212  31.460292  31.398931   ...    31.500659
50%    31.460968  31.516328  31.511749  31.448647   ...    31.542774
75%    31.512171  31.565814  31.555854  31.492793   ...    31.583822
max    31.564134  31.601316  31.620487  31.547686   ...    31.632320

            9991       9992       9993       9994       9995       9996  \
count  14.000000  14.000000  14.000000  14.000000  14.000000  14.000000
mean   31.481608  31.501862  31.509630  31.495412  31.557487  31.491508
std     0.064426   0.061343   0.057857   0.068578   0.058520   0.055164
min    31.384021  31.398002  31.408542  31.387861  31.452803  31.402979
25%    31.422987  31.457732  31.468419  31.440465  31.518770  31.450746
50%    31.489742  31.509546  31.515301  31.503460  31.565790  31.493993
75%    31.532992  31.549224  31.553803  31.545975  31.596499  31.532340
max    31.576120  31.589048  31.591815  31.599973  31.645622  31.577492

            9997       9998       9999
count  14.000000  14.000000  14.000000
mean   31.522756  31.509904  31.461305
std     0.055411   0.064045   0.066058
min    31.435634  31.408430  31.363115
25%    31.482015  31.458449  31.405953
50%    31.523253  31.519991  31.463235
75%    31.567399  31.555133  31.516251
max    31.605647  31.605175  31.556127

[8 rows x 10000 columns]
```

The focus of this simulation is, nonetheless, on the final location of all these paths. Let us plot them all on the same map first, for a quick visual evaluation.

```python
latitudes, longitudes = herndon.ix['1857-9-12 20:00:00'].values

m = Basemap(llcrnrlon=-82., llcrnrlat=31, urcrnrlon=-76, urcrnrlat=32.5,
            projection='lcc', lat_0=31.5, lon_0=-78,
            resolution='h', area_thresh=1000.)
longitudes, latitudes = m(longitudes, latitudes)
x, y = m(-81.2003759, 32.0405369)   # Coordinates of Savannah, GA

m.plot(longitudes, latitudes, 'ko', markersize=1)
m.plot(x, y, 'bo')
plt.text(x-10000, y+10000, 'Savannah, GA')
m.drawmeridians(np.arange(-82,-76,1), labels=[1,1,1,1])
m.drawparallels(np.arange(31,32.5,0.25), labels=[1,1,0,0])
m.drawcoastlines()
m.drawcountries()
m.fillcontinents(color='coral')
m.drawmapboundary()
plt.show()
```

[TO BE CONTINUED]

I would like to show how to code the NAO robot to beat us at *Jotto* (5-letter Mastermind) with `python` in `Choregraphe`. I will employ a brute-force technique that requires no knowledge of the English language, the frequency of its letters, or smart combinations of vowels and consonants to try to minimize the number of attempts. It goes like this:

- Gather all 5-letter words with no repeated letters in a list.
- Choose a random word from that list—your guess—and ask for it to be scored à la Mastermind.
- Filter the list, keeping only the words that produce the same score against your guess; discard the rest.
- Go back to step 2 and repeat until the target word is found.

Coding this strategy in `python` requires only four variables (plus an optional counter):

- `whole_dict`: the list with all the words.
- `step = [x for x in whole_dict]`: a copy of `whole_dict`, which is going to be shortened on each step (hence the name). Note that stating `step = whole_dict` instead would change the contents of `whole_dict` when we change the contents of `step` — not a good idea.
- `guess = random.choice(step)`: a random choice from the list `step`.
- `score`: a string containing the two digits we obtain after scoring the guess. The first digit indicates the number of correct letters in the same position as in the target word; the second digit indicates the number of correct letters in the wrong position.
- `attempts`: optional. The number of attempts at guessing words, for quality-control purposes.

At this point, I urge the reader to stop reading the post and try to implement this strategy as a simple script. When done, come back to see how it can be coded in the NAO robot.
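In case you want to check your work, here is one possible solution as a plain, robot-free script. It is only a sketch: the tiny `whole_dict` below is a hypothetical stand-in for the full dictionary of five-letter words with no repeated letters.

```python
import random

def compute_score(guess, target):
    """Mastermind-style score as a two-digit string: letters in the
    correct position first, then correct letters in the wrong position."""
    correct = sum(g == t for g, t in zip(guess, target))
    present = sum(letter in target for letter in guess)
    return str(correct) + str(present - correct)

def play(whole_dict, target):
    step = [x for x in whole_dict]       # shrinking copy of the dictionary
    attempts = 0
    while True:
        guess = random.choice(step)      # step 2: a random guess
        attempts += 1
        if guess == target:
            return attempts
        score = compute_score(guess, target)
        # step 3: keep only the words that yield the same score against the guess
        step = [word for word in step if compute_score(guess, word) == score]

# a tiny hypothetical dictionary; the real one holds thousands of words
whole_dict = ["acute", "firth", "foams", "junta", "crane", "slate"]
print(play(whole_dict, "acute"))
```

Since the target always survives the filtering, the loop is guaranteed to finish.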

First, a screenshot of the `root` view in `Choregraphe`. Note the simplicity of the code.

The first box, `ALMemory starter`, is a *script box* where we define all variables needed to run the program — old school. This is the code of that box:

```python
from urllib2 import urlopen

def no_repeats(word):
    """ Boolean: indicates whether a word has repeated letters """
    new_word = ""
    for letter in word:
        if letter not in new_word:
            new_word += letter
    return new_word == word

# Get a list of 5-letter words from the following URL
file = urlopen("http://www.math.sc.edu/~blanco/words.txt")
words = file.read().split("\n")
file.close()

# remove empty entries and words with repeated letters
words = filter(lambda x: len(x)>0, words)
words = filter(no_repeats, words)

class MyClass(GeneratedClass):
    def __init__(self):
        GeneratedClass.__init__(self)
        pass

    def onLoad(self):
        # We start by creating a proxy to the memory of the robot,
        # where we will insert all the needed variables
        self.memoryProxy = ALProxy("ALMemory")
        pass

    def onUnload(self):
        pass

    def onInput_onStart(self):
        # These are the variables we need:
        self.memoryProxy.insertData("whole_dict", words)
        self.memoryProxy.insertData("step", [x for x in words])
        self.memoryProxy.insertData("attempts", 0)
        self.memoryProxy.insertData("score", "00")
        self.memoryProxy.insertData("guess", None)
        self.onStopped()
        pass

    def onInput_onStop(self):
        self.onUnload()
        pass
```

The second box, `Deal with my word`, is a *flow diagram* containing two boxes:

- One simple *script box*, `Say the word`. This box performs the random choice from `step`, stores it in the variable `guess`, and increments the value of `attempts` by one. It then asks the player whether `guess` is the correct word — and spells it out, just in case.
- A *speech recognition box* that expects a yes/no answer.

This is the code of `Say the word`:

```python
from time import sleep
from random import choice

class MyClass(GeneratedClass):
    def __init__(self):
        GeneratedClass.__init__(self)
        pass

    def onLoad(self):
        self.talkProxy = ALProxy("ALTextToSpeech")
        self.memoryProxy = ALProxy("ALMemory")
        pass

    def onUnload(self):
        pass

    def onInput_onStart(self):
        step = self.memoryProxy.getData("step")
        guess = choice(step)
        self.memoryProxy.insertData("guess", guess)
        attempts = self.memoryProxy.getData("attempts")
        self.memoryProxy.insertData("attempts", attempts + 1)
        self.talkProxy.say("Is it %s?" % guess)
        sleep(1)
        for letter in guess:
            self.talkProxy.say(letter)
            sleep(1)
        self.onStopped()
        pass

    def onInput_onStop(self):
        self.onUnload()
        pass
```

A *switch case* comes next: if the player says “yes”, the program concludes with the *script box* `Success!`, which states the number of attempts. The code is simple, and it illustrates once again the way to access data from `ALMemory`. The only relevant portions of the code in that box are the methods `onLoad` and `onInput_onStart`:

```python
def onLoad(self):
    self.talkProxy = ALProxy("ALTextToSpeech")
    self.memoryProxy = ALProxy("ALMemory")
    pass

def onInput_onStart(self):
    attempts = self.memoryProxy.getData("attempts")
    self.talkProxy.say("Wonderful! I got it in %s attempts!" % str(attempts))
    pass
```

If the player says “no”, we retrieve the two scores with two *flow diagram* boxes: `First Score` and `Second Score`. I will present the structure and code of the latter, and leave the code of the former as an exercise.

The first box is a simple *Say* box that asks for the score. The second is a *speech recognition* box that expects any digit from 0 to 5. The third is a *script box* that receives the second box’s output (as a `string`), and performs the following operation (only `onLoad` and `onInput_onStart` are shown):

```python
def onLoad(self):
    self.memoryProxy = ALProxy("ALMemory")
    pass

def onInput_onStart(self, p):
    score = self.memoryProxy.getData("score")
    score = score[0] + p
    self.memoryProxy.insertData("score", score)
    self.onStopped()
    pass
```

And finally, the *script box* `Say the score`. This is where the magic happens. In this box we receive the complete score, and filter the words of `step` to eliminate those that do not share the same score as `guess`. We perform that with the following code:

```python
def compute_score(guess, target):
    """ This function scores guess against target, Mastermind style """
    correct = 0
    moved = 0
    for x in range(5):
        if guess[x] in target:
            if guess[x] == target[x]:
                correct += 1
            else:
                moved += 1
    return str(correct)+str(moved)

def register_score(score, step, guess):
    """ A simple filter to eliminate unwanted words from step """
    return filter(lambda x: compute_score(guess, x)==score, step)

class MyClass(GeneratedClass):
    def __init__(self):
        GeneratedClass.__init__(self)
        pass

    def onLoad(self):
        self.memoryProxy = ALProxy("ALMemory")
        self.talkProxy = ALProxy("ALTextToSpeech")
        pass

    def onUnload(self):
        pass

    def onInput_onStart(self):
        score = self.memoryProxy.getData("score")
        step = self.memoryProxy.getData("step")
        guess = self.memoryProxy.getData("guess")
        self.talkProxy.say("So, the score of %s is %s %s"
                           % (guess, score[0], score[1]))
        step = register_score(score, step, guess)
        #self.talkProxy.say("We are down to %s words" % str(len(step)))
        self.memoryProxy.insertData("step", step)
        self.onStopped()
        pass

    def onInput_onStop(self):
        self.onUnload()
        pass
```

That is all! I hope you enjoy playing *Jotto* with your NAO. The code can be greatly improved by adding motions in the different steps, or by implementing better winning strategies. With brute force, NAO is able to find your word in about 5 attempts (on average, of course). Do you think you can find a strategy that brings this number down to 4 attempts? If so, I would love to hear about it.

- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Back in the 1980s, robotics—understood as autonomous mechanical thinking—was no more than a dream. A wonderful dream that fueled many children’s imaginations and probably shaped the career choices of some. I know in my case it did.

Fast forward some thirty-odd years, when I met *Astro*: one of three research robots manufactured by the French company Aldebaran. This NAO robot found its way into the computer science classroom of Tom Simpson at Heathwood Hall Episcopal School, and quickly learned to navigate mazes, recognize some students’ faces and names, and even dance the Macarena! It did so with effortless coding: a basic command of the computer language `python`, and some idea of object-oriented programming.

I could not let this opportunity pass. I created a small undergraduate team with Danielle Talley from USC (a brilliant sophomore in computer engineering, with a minor in music), and two math majors from Morris College: my Geometry expert Fabian Maple, and a MacGyver-style problem solver, Wesley Alexander. Wesley and Fabian are supported by a Department of Energy-Environmental Management grant to Morris College, which funds their summer research experience at USC. Danielle is funded by the National Science Foundation through the Louis Stokes South Carolina Alliance for Minority Participation (LS-SCAMP).

They spent the best part of their first week on this project completing a basic programming course online. At the same time, the four of us reviewed some of the mathematical tools needed to teach Astro new tricks: basic algebra and trigonometry, basic geometry, and basic calculus and statistics. The emphasis—I need to point out in case you missed it—is on the word **basic**.

The psychologist seated herself and watched Herbie narrowly as he took a chair at the other side of the table and went through the three books systematically.

At the end of half an hour, he put them down, “Of course, I know why you brought these.”

The corner of Dr. Calvin’s lip twitched, “I was afraid you would. It’s difficult to work with you, Herbie. You’re always a step ahead of me.”

“It’s the same with these books, you know, as with the others. They just don’t interest me. There’s nothing to your textbooks. Your science is just a mass of collected data plastered together by makeshift theory — and all so incredibly simple, that it’s scarcely worth bothering about.”

“It’s your fiction that interests me. Your studies of the interplay of human motives and emotions” – his mighty hand gestured vaguely as he sought the proper words.

Liar!

— Isaac Asimov

Astro can understand what I say, and maintain conversations with me. He has four microphones for voice recognition and sound localization, and two speakers for text-to-speech synthesis. But he sometimes gets confused: I say “acute,” and he understands “abide,” for example. This might sound frustrating by all standards. But mathematicians can’t be judged by any reasonable standard. Far from seeing this as a shortcoming, my team took the mechanics behind his *speech recognition* functionality as motivation, and wrote some code to turn him into an expert Mastermind player.

Wait, what? How are these two things possibly connected?

Since Astro’s hearing is far from perfect, when he hears a word that is not clear, he gathers a short list of possible candidates. He then chooses the most likely guess based upon different considerations—using clues from context, for example. Based on similar strategies, he can be taught to be a perfect player of 5-letter-word Mastermind: pick any five-letter word in the English language—any word, provided it has no repeated letters. Astro chooses one random word as his first guess, and requests a score as in Mastermind.

To illustrate this process, I will show you how the game went when I chose Fabian’s favorite word: “acute.”

```
Astro: Is it firth?
Me   : NO
Astro: How many letters of my guess are in your word, in the right position?
       How many letters of my guess are in your word, in the wrong position?
Me   : 1, 0
Astro: Is it foams?
Me   : NO
Astro: How many letters of my guess are in your word, in the right position?
       How many letters of my guess are in your word, in the wrong position?
Me   : 0, 1
Astro: Is it junta?
Me   : NO
Astro: How many letters of my guess are in your word, in the right position?
       How many letters of my guess are in your word, in the wrong position?
Me   : 1, 2
Astro: Is it acute?
Me   : YES
Astro: Wonderful, I found it in 4 attempts!
```

I don’t want to get too technical here, but will mention some of the ideas. The main algorithm is based on techniques of numerical root finding and solving nonlinear equations — nothing complex: high-school level bracketing by bisection, or Newton’s method. To design better winning strategies, my team exploits the benefits of randomness. The analysis of this part is done with basic probability and statistics.
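For reference, the bracketing technique just mentioned fits in a few lines. This is a generic sketch (not the team's actual code), applied here to computing the square root of 2:

```python
def bisection(f, a, b, tol=1e-10):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        midpoint = (a + b) / 2.0
        if f(a) * f(midpoint) <= 0:
            b = midpoint        # the root lies in the left half
        else:
            a = midpoint        # the root lies in the right half
    return (a + b) / 2.0

# the square root of 2, computed as the positive root of x**2 - 2
root = bisection(lambda x: x**2 - 2, 0.0, 2.0)
```

Each pass halves the bracket, so the error is guaranteed to shrink below `tol` in a predictable number of iterations.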

Donovan’s pencil pointed nervously. “The red cross is the selenium pool. You marked it yourself.”

“Which one is it?” interrupted Powell. “There were three that MacDougal located for us before he left.”

“I sent Speedy to the nearest, naturally; seventeen miles away. But what difference does that make?” There was tension in his voice. “There are penciled dots that mark Speedy’s position.”

And for the first time Powell’s artificial aplomb was shaken and his hands shot forward for the man.

“Are you serious? This is impossible.”

“There it is,” growled Donovan.

The little dots that marked the position formed a rough circle about the red cross of the selenium pool. And Powell’s fingers went to his brown mustache, the unfailing signal of anxiety.

Donovan added: “In the two hours I checked on him, he circled that damned pool four times. It seems likely to me that he’ll keep that up forever. Do you realize the position we’re in?”

Runaround

— Isaac Asimov

Astro moves around too. He does so thanks to a sophisticated system combining one accelerometer, one gyrometer and four ultrasonic sensors that provide him with stability and positioning within space. He also enjoys eight force-sensing resistors and two bumpers. And that is only for his legs! He can move his arms, bend his elbows, open and close his hands, or move his torso and neck (up to 25 degrees of freedom for the combination of all possible joints). Out of the box, and without much effort, he can be coded to walk around, although in a mechanical way: he moves forward a few feet, stops, rotates in place or steps to a side, and so on. A very naïve way to go from `A` to `B` while retrieving an object at `C` could easily be coded in this fashion, as the diagram shows:

Fabian and Wesley devised a different way to code Astro, taking full advantage of his inertial measurement unit. This allows him to move around smoothly, almost like a human would. The key to their success? Polynomial interpolation and plane geometry. For more advanced solutions, they will need to learn about splines, curvature, and optimization. Nothing they can’t handle.
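A toy illustration of the interpolation idea (with made-up waypoints, not the team's actual data): fit a low-degree polynomial through a start point `A`, the object's location `C`, and the destination `B`, then sample it densely to obtain a smooth trajectory to follow.

```python
import numpy as np

# hypothetical waypoints in the plane: start A, object C, destination B
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.5, 0.0])

# the unique quadratic through the three waypoints
coefficients = np.polyfit(x, y, 2)

# a dense sampling of the curve gives a smooth path instead of
# the stop-rotate-step sequence of the naive approach
path_x = np.linspace(0.0, 2.0, 50)
path_y = np.polyval(coefficients, path_x)
```

Because the polynomial passes exactly through the waypoints, the robot still visits `A`, `C` and `B`, but the heading changes continuously along the way.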

He said he could manage three hours and Mortenson said that would be perfect when I gave him the news. We picked a night when she was going to be singing Bach or Handel or one of those old piano-bangers, and was going to have a long and impressive solo.

Mortenson went to the church that night and, of course, I went too. I felt responsible for what was going to happen and I thought I had better oversee the situation.

Mortenson said, gloomily, “I attended the rehearsals. She was just singing the same way she always did; you know, as though she had a tail and someone was stepping on it.”

One Night of Song

— Isaac Asimov

Astro has excellent eyesight and understanding of the world around him. He is equipped with two HD cameras and a bunch of computer vision algorithms, including facial and shape recognition. Danielle’s dream is to have him read from a music sheet and sing or play the song on a toy piano. She is very close to completing this project: Astro is now able to identify music scores, and to extract from them the location of the staves. Danielle is currently working on identifying the notes and the clefs. This is one of her test images, and the result of one of her early experiments:

Most of the techniques Danielle is using are accessible to any student with a decent command of vector calculus and enough scientific maturity. The extraction of the staves and the different notes on them, for example, is performed with the Hough transform. This is a fancy term for an algorithm that basically searches for straight lines and circles by solving an optimization problem in two or three variables.
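To give a flavor of the voting idea behind the Hough transform, here is a bare-bones, hypothetical sketch (not Danielle's actual code) restricted to horizontal lines: each row of a binary image acts as an accumulator bin, every dark pixel votes for a horizontal line through its row, and rows with enough votes are declared staff lines.

```python
import numpy as np

def staff_line_rows(image, threshold=0.8):
    """Hough-style voting restricted to horizontal lines: count the dark
    pixels in each row, and return the rows whose vote count exceeds the
    given fraction of the image width."""
    votes = image.sum(axis=1)
    return np.where(votes > threshold * image.shape[1])[0]

# synthetic page: 1 marks a dark pixel; staff lines at rows 10, 20, 30, 40, 50
page = np.zeros((60, 100), dtype=int)
for row in (10, 20, 30, 40, 50):
    page[row, :] = 1

print(staff_line_rows(page))   # the rows holding the staff lines
```

The full Hough transform works the same way, except that the bins are parametrized by both the angle and the offset of the candidate line (or the center and radius of a candidate circle).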

The only thing left is an actual performance. Danielle will be leading Fabian and Wes, and with the assistance of Mr. Simpson’s awesome students Erica and Robert, Astro will hopefully learn to physically approach the piano, choose the right keys, and play them in the correct order and speed. Talent show, anyone?

Let so that too.

Show that and

I got this problem by picking some strictly positive value and breaking the integral as follows:

Let us examine now the factors and above:

We have thus proven that with At this point, all you have to do is pick (provided the denominator is not zero) and you are done.

But I am not entirely happy with what I see: my lack of training in the area of Combinatorics results in a rather dry treatment of that part of the mindmap, for example. I am afraid that the same could be said about other parts of the diagram. Any help from the reader to clarify and polish this information will be very much appreciated.

And as a bonus, I have included a script to generate the diagram with the aid of the `tikz` libraries.

```latex
\tikzstyle{level 2 concept}+=[sibling angle=40]
\begin{tikzpicture}[scale=0.49, transform shape]
  \path[mindmap,concept color=black,text=white]
  node[concept] {Pure Mathematics} [clockwise from=45]
    child[concept color=DeepSkyBlue4]{
      node[concept] {Analysis} [clockwise from=180]
      child { node[concept] {Multivariate \& Vector Calculus}
        [clockwise from=120] child {node[concept] {ODEs}}}
      child { node[concept] {Functional Analysis}}
      child { node[concept] {Measure Theory}}
      child { node[concept] {Calculus of Variations}}
      child { node[concept] {Harmonic Analysis}}
      child { node[concept] {Complex Analysis}}
      child { node[concept] {Stochastic Analysis}}
      child { node[concept] {Geometric Analysis}
        [clockwise from=-40] child {node[concept] {PDEs}}}}
    child[concept color=black!50!green, grow=-40]{
      node[concept] {Combinatorics} [clockwise from=10]
      child {node[concept] {Enumerative}}
      child {node[concept] {Extremal}}
      child {node[concept] {Graph Theory}}}
    child[concept color=black!25!red, grow=-90]{
      node[concept] {Geometry} [clockwise from=-30]
      child {node[concept] {Convex Geometry}}
      child {node[concept] {Differential Geometry}}
      child {node[concept] {Manifolds}}
      child {node[concept,color=black!50!green!50!red,text=white]
        {Discrete Geometry}}
      child { node[concept] {Topology} [clockwise from=-150]
        child {node [concept,color=black!25!red!50!brown,text=white]
          {Algebraic Topology}}}}
    child[concept color=brown,grow=140]{
      node[concept] {Algebra} [counterclockwise from=70]
      child {node[concept] {Elementary}}
      child {node[concept] {Number Theory}}
      child {node[concept] {Abstract} [clockwise from=180]
        child {node[concept,color=red!25!brown,text=white]
          {Algebraic Geometry}}}
      child {node[concept] {Linear}}}
  node[extra concept,concept color=black] at (200:5) {Applied Mathematics}
    child[grow=145,concept color=black!50!yellow] {
      node[concept] {Probability} [clockwise from=180]
      child {node[concept] {Stochastic Processes}}}
    child[grow=175,concept color=black!50!yellow] {node[concept] {Statistics}}
    child[grow=205,concept color=black!50!yellow] {node[concept] {Numerical Analysis}}
    child[grow=235,concept color=black!50!yellow] {node[concept] {Symbolic Computation}};
\end{tikzpicture}
```

I would like to show a few more examples of beautiful curves generated with this technique, together with their generating axiom, rules and parameters. Feel free to click on each of the images below to download a larger version.

Note that any coding language with plotting capabilities should be able to tackle this project. I used `tikz` once again, but this time with the tikzlibrary `lindenmayersystems`.

Would you like to experiment a little with axioms, rules and parameters, and obtain some new pleasant curves with this method? If the mathematical properties of the fractal that they approximate are interesting enough, I bet you could attach your name to them. Like the astronomer who finds a new object in the sky through her telescope, or the zoologist who discovers a new species of spider in the forest.
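Before worrying about the plotting side, it is worth noting that the rewriting core of an L-system fits in a few lines of code. The sketch below (in Python; the `lsystem` helper and the quadratic-Koch rule are illustrative choices, not taken from the `tikz` code above) shows how an axiom is expanded under a set of production rules:

```python
def lsystem(axiom, rules, iterations):
    """Expand an L-system: repeatedly rewrite every symbol via its rule.

    Symbols with no rule (e.g. the turn commands '+' and '-') are copied
    unchanged.
    """
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Example: quadratic Koch curve, axiom "F", rule F -> F+F-F-F+F,
# where "+"/"-" would be interpreted by a plotter as 90-degree turns.
print(lsystem("F", {"F": "F+F-F-F+F"}, 1))  # F+F-F-F+F
```

A plotting backend (turtle graphics, `matplotlib`, or `tikz` itself) then only needs to walk the resulting string, drawing a segment for each `F` and turning for each `+` or `-`.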

]]>- It also goes through the feet of the three heights of the triangle.
- If H denotes the orthocenter of the triangle ABC, then the Feuerbach circle also goes through the midpoints of the segments AH, BH and CH. For this reason, the Feuerbach circle is also called the **nine-point circle.**
- The center of the Feuerbach circle is the midpoint between the orthocenter and circumcenter of the triangle.
- The area of the circumcircle is precisely four times the area of the Feuerbach circle.

Most of these results are easily shown with `sympy` without the need to resort to Gröbner bases or Ritt-Wu techniques. As usual, we realize that the properties are independent of rotation, translation or dilation, and so we may assume that the vertices of the triangle are A = (0, 0), B = (1, 0) and C = (r, s), for some positive parameters r and s. To prove the last statement, for instance, we may issue the following:

>>> import sympy
>>> from sympy import *
>>> A=Point(0,0)
>>> B=Point(1,0)
>>> r,s=var('r,s')
>>> C=Point(r,s)
>>> D=Segment(A,B).midpoint
>>> E=Segment(B,C).midpoint
>>> F=Segment(A,C).midpoint
>>> simplify(Triangle(A,B,C).circumcircle.area/Triangle(D,E,F).circumcircle.area)
4

But probably the most amazing property of the nine-point circle is the fact that it is tangent to the incircle of the triangle. With the exception of equilateral triangles, both circles intersect at only one point: the so-called **Feuerbach point**.
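As a quick sanity check of this tangency (on one concrete instance, not a proof), we can take a 3-4-5 right triangle in `sympy` and verify that the distance between the centers of the nine-point circle (computed, as above, as the circumcircle of the medial triangle) and the incircle equals the difference of their radii, which is precisely the condition for internal tangency:

```python
from sympy import Point, Segment, Triangle, simplify

# A concrete 3-4-5 right triangle
A, B, C = Point(0, 0), Point(4, 0), Point(0, 3)
T = Triangle(A, B, C)

# The nine-point circle is the circumcircle of the medial triangle
medial = Triangle(Segment(A, B).midpoint,
                  Segment(B, C).midpoint,
                  Segment(A, C).midpoint)
nine = medial.circumcircle
inc = T.incircle

# Internal tangency: distance between centers equals difference of radii
d = nine.center.distance(inc.center)
assert simplify(d - (nine.radius - inc.radius)) == 0
```

For this triangle the incircle has radius 1 and the nine-point circle radius 5/4, with centers 1/4 apart, so the two circles touch at a single point.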

The two results that we want to prove, whose synthetic proofs can be read in [this article], are the following:

Theorem 1. The circle through the feet of the internal bisectors of triangle ABC passes through the Feuerbach point of the triangle.

Theorem 2. The Feuerbach point of triangle ABC is the anti-Steiner point of the Euler line of the intouch triangle of ABC, with respect to the same triangle.

Let us prove the first Theorem for the case of a right triangle. We only need to modify the definition of the point C to guarantee this situation. We set C = (0, s), for some positive value of the parameter s.

Let us start by collecting the hypothesis polynomials that define the coordinates of the Feuerbach point of the triangle:

>>> import sympy
>>> from sympy import *
>>> A=Point(0,0)
>>> B=Point(1,0)
>>> var('s',positive=True)
s
>>> C=Point(0,s)
>>> Triangle(A,B,C).incircle.equation()
-s**2/(s + sqrt(s**2 + 1) + 1)**2 + (-s/(s + sqrt(s**2 + 1) + 1) + x)**2 + (-s/(s + sqrt(s**2 + 1) + 1) + y)**2

We shall introduce the parameter u in our polynomial rings, and relate it to the parameter s so that u**2 = s**2 + 1. This will allow us to express the incenter, the incircle, and their interaction with other objects, in a simple polynomial way.

>>> var('u')
u
>>> expr=Triangle(A,B,C).incircle.equation().subs(sqrt(s**2+1),u)
>>> numer(together(expr))
-s**2 + (-s + x*(s + u + 1))**2 + (-s + y*(s + u + 1))**2
>>> T=Triangle(Segment(A,B).midpoint, Segment(A,C).midpoint, Segment(B,C).midpoint)
>>> expr=together(T.circumcircle.equation())
>>> numer(expr)
-s**2 + (-s + 4*y)**2 + (4*x - 1)**2 - 1

We can then use the following three polynomials, in the coordinates (x1, y1) of the Feuerbach point, to define it:

h1 = s**2 + 1 - u**2
h2 = -s**2 + (-s + 4*y1)**2 + (4*x1 - 1)**2 - 1
h3 = -s**2 + (-s + x1*(s + u + 1))**2 + (-s + y1*(s + u + 1))**2
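These hypothesis polynomials are easy to sanity-check on a concrete instance. For s = 3/4 (a scaled 3-4-5 right triangle) we have u = 5/4, and intersecting the incircle with the nine-point circle places the tangency point at (1/4, 1/2); all three polynomials should vanish there. The values below were computed independently for this check:

```python
from sympy import Rational

# Concrete instance: s = 3/4, hence u = sqrt(s**2 + 1) = 5/4, and the
# Feuerbach point of the triangle (0,0), (1,0), (0,3/4) is (1/4, 1/2)
s, u = Rational(3, 4), Rational(5, 4)
x1, y1 = Rational(1, 4), Rational(1, 2)

h1 = s**2 + 1 - u**2
h2 = -s**2 + (-s + 4*y1)**2 + (4*x1 - 1)**2 - 1
h3 = -s**2 + (-s + x1*(s + u + 1))**2 + (-s + y1*(s + u + 1))**2
assert h1 == h2 == h3 == 0
```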

Let us now proceed to compute the three feet of the internal bisectors, starting by collecting the coordinates of the incenter:

>>> Ic=Triangle(A,B,C).incenter.subs(sqrt(s**2+1),u)
>>> intersection(Line(A,C), Line(B,Ic))
[Point(0, s/(u + 1))]
>>> intersection(Line(A,B), Line(C,Ic))
[Point(s/(s + u), 0)]
>>> intersection(Line(B,C), Line(A,Ic))
[Point(s/(s + 1), s/(s + 1))]

This gives us the hypothesis polynomials for the coordinates of the three feet, (0, y2), (x3, 0) and (x4, y4):

h4 = (u + 1)*y2 - s
h5 = (s + u)*x3 - s
h6 = (s + 1)*x4 - s
h7 = (s + 1)*y4 - s

Note that we do not need to include polynomials for x2 or y3, since those coordinates are always zero. We proceed to compute a polynomial equation of a circle that goes through the last three points:

>>> var('x2:5 y2:5')
(x2, x3, x4, y2, y3, y4)
>>> Q1=Point(0,y2)
>>> Q2=Point(x3,0)
>>> Q3=Point(x4,y4)
>>> expr=together(Triangle(Q1,Q2,Q3).circumcircle.equation())
>>> numer(expr)
-(y2*(-x3**2 + x4**2 + y4**2) + y4*(x3**2 - y2**2))**2 + (2*x*(x3*y4 - y2*(x3 - x4)) - y2*(-x3**2 + x4**2 + y4**2) - y4*(x3**2 - y2**2))**2 + (-x3*(-x3**2 + x4**2 + y4**2) + 2*y*(x3*y4 - y2*(x3 - x4)) - (x3 - x4)*(x3**2 - y2**2))**2 - (x3*(-x3**2 + x4**2 + y4**2) - 2*y2*(x3*y4 - y2*(x3 - x4)) + (x3 - x4)*(x3**2 - y2**2))**2

We thus have the thesis polynomial g, expressing that the coordinates (x1, y1) of the Feuerbach point belong to the required circle: it is the expression above with x and y replaced by x1 and y1.

We could then use the following code, in `sage` or `sympy` in a `Python` session, to prove the result:

import sympy
from sympy import var, prem

var('s u x1:5 y1:5')

# Hypotheses for the Feuerbach point
h1=s**2+1-u**2
h2=-s**2 + (-s + 4*y1)**2 + (4*x1 - 1)**2 - 1
h3=-s**2 + (-s + x1*(s + u + 1))**2 + (-s + y1*(s + u + 1))**2

# Hypotheses for the three feet
h4=(u+1)*y2-s
h5=(s+u)*x3-s
h6=(s+1)*x4-s
h7=(s+1)*y4-s

# Thesis polynomial
g=(-(y2*(-x3**2 + x4**2 + y4**2) + y4*(x3**2 - y2**2))**2
   + (2*x1*(x3*y4 - y2*(x3 - x4)) - y2*(-x3**2 + x4**2 + y4**2) - y4*(x3**2 - y2**2))**2
   + (-x3*(-x3**2 + x4**2 + y4**2) + 2*y1*(x3*y4 - y2*(x3 - x4)) - (x3 - x4)*(x3**2 - y2**2))**2
   - (x3*(-x3**2 + x4**2 + y4**2) - 2*y2*(x3*y4 - y2*(x3 - x4)) + (x3 - x4)*(x3**2 - y2**2))**2)

# The system is almost triangular, except for polynomials h2 and h3.
# Substitute those two by the proper pseudo-remainders wrt y1
h3a=prem(h3,h2,y1)
h2a=prem(h2,h3a,y1)

# The proof (by the Ritt-Wu method):
# If the Theorem is true, the last remainder must be zero.
R=prem(g,h7,y4)
R=prem(R,h6,x4)
R=prem(R,h5,x3)
R=prem(R,h4,y2)
R=prem(R,h3a,y1)
R=prem(R,h2a,x1)
R=prem(R,h1,u)
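For readers unfamiliar with `prem`: it computes the pseudo-remainder of one polynomial by another with respect to a chosen variable, and this is the basic reduction step of the Ritt-Wu method used above. A minimal illustration:

```python
from sympy import prem, var

x = var('x')

# x - 1 divides x**2 - 1, so the pseudo-remainder is zero...
assert prem(x**2 - 1, x - 1, x) == 0

# ...while a nonzero pseudo-remainder certifies non-divisibility:
# x**2 + 1 = (x + 1)*(x - 1) + 2
assert prem(x**2 + 1, x - 1, x) == 2
```

Successively reducing the thesis polynomial by the (triangularized) hypothesis polynomials, as in the proof above, leaves a final remainder that must vanish if the thesis follows from the hypotheses.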

To prove the second theorem, we need to go over the notion of anti-Steiner points first, and some of their properties:

It is known that there is a one-to-one correspondence between points on the circumcircle of a triangle and lines through its orthocenter, via reflections across the sides of the triangle:

- If P is a point belonging to the circumcircle, then its images under reflection across the three sides of the triangle are collinear, and the corresponding line goes through the orthocenter. This is called the **Steiner line** of the point P.
- Similarly, if a line goes through the orthocenter, then its three reflections across the sides of the triangle are concurrent at a point of the circumcircle. This is called the **anti-Steiner point** of the line.

It is relatively easy to prove these two facts with `sympy` (for the geometric computations) and `sagemath` (for the Gröbner bases calculations). For example, for the first property, we could do as follows: Start by defining a function that computes the reflection of a point with respect to a line.

def reflection(Pt,ln):
    # mirror image of the point Pt across the line ln
    Q=intersection(ln.perpendicular_line(Pt),ln)[0]
    return Point(2*Q.x-Pt.x, 2*Q.y-Pt.y)

Given a generic point P = Point(x, y), its three reflections with respect to the sides of the triangle read as follows:

>>> x,y=var('x,y')
>>> P=Point(x,y)
>>> reflection(P,Line(A,B))
Point(x, -y)
>>> reflection(P,Line(B,C))
Point(-x + 2*(s**2 + (r - 1)*(r*x + s*y - x))/(s**2 + (r - 1)**2), 2*s*(r*x - r + s*y - x + 1)/(s**2 + (r - 1)**2) - y)
>>> reflection(P,Line(A,C))
Point(2*r*(r*x + s*y)/(r**2 + s**2) - x, 2*s*(r*x + s*y)/(r**2 + s**2) - y)

If the point P is to belong to the circumcircle of triangle ABC, then it must satisfy the corresponding polynomial equation:

>>> numer(together(Triangle(A,B,C).circumcircle.equation()))
s**2*(2*x - 1)**2 - s**2 - (r**2 - r + s**2)**2 + (-r**2 + r - s**2 + 2*s*y)**2

This is the (only needed) hypothesis polynomial. The thesis polynomial, which asks whether the three reflected points P1, P2 and P3 are collinear, can be reduced to asking whether the triangle they form has zero area:

>>> numer(together(Triangle(P1,P2,P3).area))
2*s**2*(r**2*y - r*y + s**2*y - s*x**2 + s*x - s*y**2)
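Before invoking Gröbner bases, a concrete instance is reassuring. The self-contained snippet below fixes r = 1/2, s = 1 and a rational point chosen on the circumcircle (the particular point is an illustrative choice), then checks that the three reflections are collinear and that their line passes through the orthocenter:

```python
from sympy import Point, Line, Triangle, Rational, intersection

def reflection(Pt, ln):
    # mirror image of the point Pt across the line ln
    Q = intersection(ln.perpendicular_line(Pt), ln)[0]
    return Point(2*Q.x - Pt.x, 2*Q.y - Pt.y)

# The triangle with r = 1/2, s = 1
A, B, C = Point(0, 0), Point(1, 0), Point(Rational(1, 2), 1)

# (7/8, 7/8) lies on the circumcircle: center (1/2, 3/8), radius 5/8
P = Point(Rational(7, 8), Rational(7, 8))

P1 = reflection(P, Line(A, B))
P2 = reflection(P, Line(B, C))
P3 = reflection(P, Line(A, C))

# The three reflections are collinear (they span the Steiner line of P)...
assert Point.is_collinear(P1, P2, P3)
# ...and the Steiner line passes through the orthocenter
assert Line(P1, P2).contains(Triangle(A, B, C).orthocenter)
```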

A quick `sagemath` session proves the result true:

sage: R.<x,y,z,r,s>=PolynomialRing(QQ,5,order='lex')
sage: h=s**2*(2*x - 1)**2-s**2-(r**2-r+s**2)**2+(-r**2+r-s**2+2*s*y)**2
sage: g=2*s**2*(r**2*y-r*y+s**2*y-s*x**2+s*x-s*y**2)
sage: I=R.ideal(1-z*g,h)
sage: I.groebner_basis()
[1]]]>
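For readers without a `sagemath` installation, the same radical-membership test (the Rabinowitsch trick: g vanishes wherever h does precisely when 1 belongs to the ideal generated by h and 1 - z*g) can be reproduced with `sympy`'s own `groebner`:

```python
from sympy import symbols, groebner

x, y, z, r, s = symbols('x y z r s')

# hypothesis: P on the circumcircle; thesis: zero area of the reflections
h = s**2*(2*x - 1)**2 - s**2 - (r**2 - r + s**2)**2 + (-r**2 + r - s**2 + 2*s*y)**2
g = 2*s**2*(r**2*y - r*y + s**2*y - s*x**2 + s*x - s*y**2)

# Groebner basis [1] means the ideal is the whole ring, proving the claim
G = groebner([1 - z*g, h], x, y, z, r, s, order='lex')
assert list(G.exprs) == [1]
```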