Saturday, April 01, 2017

Humanities

I started reading this and had a sudden insight on a related topic.

Nowadays, it is actually possible to engineer almost anything you would like to, given enough time and resources: something like this or this or this (feel free to add your own examples, of course).

So, the true difference between success and failure lies in doing this with the few resources available.

To convince you, note that this was not true, say, 500 years ago. Many things were simply not known. You barely had any understanding of the scientific method at your disposal (actually, you would have had to wait 15 more years for it). No matter how hard you tried, you could not fly to the moon. Period.

Today, by contrast, you could send people to Mars, starships to Proxima Centauri, eradicate malaria, and maybe even probe the limits of quantum mechanics and general relativity.

The art now is to do it without resources. We have had a shift from in-depth reflection to tactical thinking. I find this most fascinating, and I also think it will lead to a reevaluation of the humanities. If it is no longer that important to have the most profound technical-scientific knowledge, it becomes more and more important to understand interactions between individuals working toward a goal, to identify the cognitive biases leading to poor decisions, and to assess the true, often non-technical motivations behind apparently technical opinions.

Friday, May 31, 2013

Test-driven maths I - convergent sequences

This is my first post for my test-driven maths project. I want to show today that, working in the paradigm of test-driven development, we can develop a working definition of a convergent sequence. Metaphorically speaking, we want to develop a mathematical "program" that, given a sequence, tells us whether this sequence is convergent in some useful sense.

I will start with an easy example (which will be our first test). We look at the following sequence of numbers, which we will call

Test sequence 1

$$1, \frac{1}{2},  \frac{1}{3},  \frac{1}{4}, ...$$

or, in other terms, $\{\frac{1}{n}\}_n$.

It is clear (by intuition) that the numbers in this sequence (in the following, sequence 1) become smaller and smaller, approaching, but never reaching, 0. For this reason we will use it as our first test case, and try to derive a formal definition of what it means for a sequence of numbers to converge to 0.

By looking at the sequence, two things become apparent: 1) the numbers get smaller and smaller and 2) the numbers are always positive. So we try our first definition.

Convergent sequences, take 1

A sequence of positive numbers is said to be convergent to 0 if the numbers become smaller and smaller.

Let us now try to put it in more formal terms.


Convergent sequences, take 2

A sequence of positive numbers $x_n\geq0$ is said to be convergent to 0 if $x_{n+1}< x_n$ for every index $n$ of the sequence.

Since $\frac{1}{n+1}<\frac{1}{n}$, this definition covers our test sequence 1, in the sense that, according to it, our sequence converges to 0. Can we stop now? No. We have written a small test (checking whether sequence 1 converges) and a small piece of code (our take 2). But our mathematical insight is not yet satisfied, because our test does not cover many possible inputs (in the form of test sequences, of course). So, we have to extend our test.
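Pushing the analogy, here is a minimal Python sketch of take 2 as a runnable predicate, checked against a truncated prefix of test sequence 1. Needless to say, a finite prefix can only support, not prove, convergence, and the helper names are mine:

def converges_take2(xs):
    """Take 2: non-negative numbers that are strictly decreasing."""
    non_negative = all(x >= 0 for x in xs)
    decreasing = all(b < a for a, b in zip(xs, xs[1:]))
    return non_negative and decreasing

seq1 = [1.0 / n for n in range(1, 101)]  # test sequence 1, first 100 terms
assert converges_take2(seq1)  # green: take 2 classifies sequence 1 as convergent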

In particular, it is perhaps useful to have a sequence of which we know (again by intuition) that it does not converge to 0, so that we can check that our definition also fails when it must. The simplest thing to do is to take sequence 1 and add 1 to all its members.


Test sequence 2

$$2, 1+\frac{1}{2},  1+\frac{1}{3},  1+\frac{1}{4}, ...$$

Now, since our sequence is composed of decreasing positive numbers, our tentative definition would call it convergent. Since we know that this sequence does not converge, our program (take 2) does not pass the test. The point is that our test sequence 2 always stays at least 1 away from 0. So, let us add to our definition that the sequence cannot stay a fixed distance away from 0.

Convergent sequences, take 3

A sequence of positive numbers $x_n\geq0$ is said to be convergent to 0 if $x_{n+1}< x_n$ for every index $n$ of the sequence and, for any positive number $\epsilon$, it is not true that all numbers in the sequence are larger than $\epsilon$.
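In code, take 3 could look like the sketch below. Since a program cannot quantify over all positive $\epsilon$, I check a handful of sample values; this is a heuristic stand-in for the real quantifier, and entirely my own choice:

def converges_take3(xs, epsilons=(0.9, 0.5, 0.1)):
    """Take 3: decreasing, non-negative, not bounded away from 0."""
    non_negative = all(x >= 0 for x in xs)
    decreasing = all(b < a for a, b in zip(xs, xs[1:]))
    # "for any eps, it is not true that all terms are larger than eps"
    not_bounded_away = all(not all(x > eps for x in xs) for eps in epsilons)
    return non_negative and decreasing and not_bounded_away

seq1 = [1.0 / n for n in range(1, 101)]      # test sequence 1, truncated
seq2 = [1 + 1.0 / n for n in range(1, 101)]  # test sequence 2, truncated
assert converges_take3(seq1)      # still green
assert not converges_take3(seq2)  # green: all terms exceed eps = 0.9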

This looks good. Let us build a new test to check whether we are really there. Say, we take sequence 2 and insert some 0s here and there. The resulting sequence should be classified as not convergent by our definition, since not all of its numbers get closer and closer to 0!


Test sequence 3

$$2, 0, 1+\frac{1}{2},  0, 1+\frac{1}{3},  0, 1+\frac{1}{4}, ...$$

Now we have a problem. Since 0 appears over and over again in our sequence, we cannot find an $\epsilon$ such that all numbers are larger than it, so on that count the sequence would be classified as convergent. But: since we inserted 0 over and over again, the numbers are not decreasing, and the sequence is classified as not convergent for that reason. So it looks like we pass the test, but for the wrong reason. Let us keep in mind that there is some problem with the decreasing property, and let us correct the part regarding the distance from 0.


Convergent sequences, take 4

A sequence of positive numbers $x_n\geq0$ is said to be convergent to 0 if $x_{n+1}< x_n$ for every index $n$ of the sequence and, for any positive number $\epsilon$, we can find some index $k$ (dependent on $\epsilon$) such that all numbers with index larger than $k$ are smaller than $\epsilon$.

Now we are on the safer side with test sequences 2 and 3. Indeed, if I choose $\epsilon = 0.9$, I am not able to find any index such that all numbers with larger index are smaller than 0.9, since some $1+\frac{1}{n}$ keeps popping up in my sequence. Are we still on the safe side with sequence 1? Yes: if I choose any integer $k\geq\frac{1}{\epsilon}$, it is clear that for all larger indices the numbers in the sequence are smaller than $\epsilon$ (this is undergraduate algebra, just try it). Note that test sequence 3 is now classified as not convergent both for not being decreasing and for not becoming arbitrarily small.
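Take 4 as a sketch, again with sampled $\epsilon$ values and truncated sequences standing in for the real quantifiers (all names mine):

def tail_small(xs, eps):
    """Is there an index k beyond which all terms are smaller than eps?"""
    # only cut in the first half, to blunt truncation artifacts at the end
    return any(all(x < eps for x in xs[k:]) for k in range(len(xs) // 2))

def converges_take4(xs, epsilons=(0.9, 0.5, 0.1, 0.01)):
    """Take 4: decreasing, non-negative, eventually below every eps."""
    non_negative = all(x >= 0 for x in xs)
    decreasing = all(b < a for a, b in zip(xs, xs[1:]))
    return non_negative and decreasing and all(tail_small(xs, e) for e in epsilons)

seq1 = [1.0 / n for n in range(1, 1001)]
seq3 = [x for n in range(1, 1001) for x in (1 + 1.0 / n, 0.0)]  # test sequence 3
assert converges_take4(seq1)      # green
assert not converges_take4(seq3)  # green: the 0s break the decreasing property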

Now let us go back to the problem with the decreasing property. If I have a sequence of numbers and scramble the order, I do not want this scrambling to change whether we call the sequence convergent or not. So, we come up with a second test sequence that must converge.


Test sequence 4

$$\frac{1}{2},  1,  \frac{1}{4}, \frac{1}{3}, ...$$

We just switched the positions of neighbouring elements. Now, this sequence is intuitively convergent, but our take 4 says it is not, since the elements are not decreasing. So, what if we drop the assumption of having decreasing numbers?


Convergent sequences, take 5

A sequence of positive numbers $x_n\geq0$ is said to be convergent to 0 if, for any positive number $\epsilon$, we can find some index $k$ (dependent on $\epsilon$) such that all numbers with index larger than $k$ are smaller than $\epsilon$.

This sounds familiar. Surprising as it may be, mathematicians at work really do use this method (more or less consciously) for finding good new definitions and, even more surprisingly, as we will see soon, for finding proofs of theorems!
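To close the loop, here is the final take as a sketch, run against all four test sequences. Same caveats as before: truncated sequences and sampled $\epsilon$ values, so this illustrates the definition rather than deciding convergence:

def converges_take5(xs, epsilons=(0.9, 0.5, 0.1, 0.01)):
    """Take 5: non-negative, eventually below every eps; no monotonicity."""
    def tail_small(eps):
        # only cut in the first half, to blunt truncation artifacts
        return any(all(x < eps for x in xs[k:]) for k in range(len(xs) // 2))
    return all(x >= 0 for x in xs) and all(tail_small(e) for e in epsilons)

seq1 = [1.0 / n for n in range(1, 1001)]
seq2 = [1 + 1.0 / n for n in range(1, 1001)]
seq3 = [x for n in range(1, 1001) for x in (1 + 1.0 / n, 0.0)]
seq4 = [x for n in range(1, 1001, 2) for x in (1.0 / (n + 1), 1.0 / n)]
assert converges_take5(seq1) and converges_take5(seq4)          # both converge
assert not converges_take5(seq2) and not converges_take5(seq3)  # neither does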

A test-driven revival

More than a year ago I left Freiburg and the BCF to start working on the development of MEMS at Bosch GmbH. For some complicated reasons connected to my work there, I got involved in software engineering, and in particular in test-driven development. But only today did I realise why I got involved there and why I like it.

In fact, test-driven development is a kind of "formalization" of how mathematicians actually work!

In particular, I find it complies with Gowers's pedagogical principle.

Following this intuition, I will try in the next days (months?) to revive this blog, and to show that what computer scientists rediscovered in the mid-90s as test-driven development is nothing but what mathematicians have been doing for centuries.

Tuesday, October 02, 2012

Wild Animals

It's been about a year since I last wrote, eh? This is due to two main factors:

 1 - I left neuroscience and, more generally, the university world, to go and work as a MEMS engineer at the Bosch plant in Reutlingen.

 2 - about 7 months ago we took a ferocious animal into our home. I won't tell you the species, to avoid visits from the Tierschutzamt or the like, only that it requires a lot of care. I will keep you promptly updated on its progress...

Saturday, October 08, 2011

Computational models of Parkinson's disease

A few days ago we had a paper accepted in which we use models to explore the possible causes of some symptoms of Parkinson's disease.

Parkinson's disease is characterized by motor and cognitive deficits. Among these, the best known is the tremor. These symptoms have a very precise neural correlate: the electrical signal in the subthalamic nucleus shows very pronounced oscillations around a frequency of 20 Hz, which are absent in the healthy brain. It has been found that suppressing these oscillations, for example by deep brain stimulation, leads to the almost immediate disappearance of the symptoms.

In our study (and here we will get a bit technical) we tried to bring some clarity to the possible causes of these oscillations. What is known with (more or less) certainty is that these oscillations are generated through a negative-positive feedback loop between the subthalamic nucleus and the external part of the globus pallidus. The dominant theory about the cause of the onset of the oscillations is an increase in the connectivity between these two structures. Unfortunately, however, the experimental data do not support this hypothesis.

Our alternative theory holds that the oscillations are generated by a higher level of activity in the structure upstream of the globus pallidus: the striatum. The globus pallidus receives most of its (inhibitory) input from the striatum. What we showed in a computational model is that an increase in the inhibitory input to the globus pallidus is by itself able to generate oscillations.

Tuesday, September 13, 2011

Podcasts & co

Today our head of external relations put the Italian version of What are computational neuroscience? online.

(Translated and narrated by yours truly)

Tuesday, July 26, 2011

Networks with distance dependent connectivity, part I

Today I will give a short tutorial on generating random networks with distance-dependent connectivity. That means: we place the nodes somewhere in space and connect them with a probability that depends on their distance. Here you can find the python script.

Let me first discuss how to construct the matrix of distances between a set of vectors $\{v_i\}$. The idea is, obviously, to use the fact that the $p$-distance between two vectors is given by the formula
$$d_p(v_1,v_2) = \|v_1-v_2\|_p = \left(\sum_k |v^k_1-v^k_2|^p\right)^{1/p}$$

For the Euclidean distance $p=2$ and we have $\|v\|^2 = (v,v)$, so the squared distance is nothing but
$$\|v_1-v_2\|^2 = (v_1-v_2,v_1-v_2)$$

This is good, because using the bilinearity of the scalar product we obtain
$$\|v_1-v_2\|^2 = \|v_1\|^2 + \|v_2\|^2 - 2(v_1,v_2)$$
This expression can be computed with matrix multiplications. In python you can do it using numpy as follows. First, construct the matrix of the positions, i.e. stack all 'size' vectors of length 'dimension' on top of each other:
import numpy

# positions: one row per node, one column per spatial dimension
dimension = 2
size = 100
positions = numpy.random.uniform(0, 1, (size, dimension))
Here I have chosen uniformly distributed vectors, but of course you can use other distributions.
Now, we construct the matrix $s_{ij} = \|v_i\|^2 +\|v_j\|^2$ by repeating, reshaping and transposing the vector of the norms. This is as easy as:
# construct the matrix s_ij = |v_i|**2+|v_j|**2
norms = numpy.sum( positions**2. , axis = 1 )
tmp = numpy.reshape(norms.repeat(size),(size,size))
sum_matrix = tmp + tmp.transpose()
'sum_matrix' is what you are looking for. The scalar products are even easier. Indeed, the matrix of the products $x_{ij} = (v_i,v_j)$ is just the product of the position matrix with its transpose (try a 2x2 example to see that it works). So you can compute it easily with
# construct the matrix x_ij = (v_i,v_j)
scalars = numpy.dot(positions,positions.transpose())
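To wrap up, here is one way to combine the two matrices into the distance matrix and then draw the random connections. The exponential kernel and the 'length_scale' value below are illustrative assumptions of mine; the post's actual script may use a different probability function:

# squared distances: |v_i - v_j|^2 = |v_i|^2 + |v_j|^2 - 2 (v_i, v_j)
dist_sq = sum_matrix - 2.0 * scalars
dist_sq = numpy.maximum(dist_sq, 0.0)  # clip tiny negative rounding errors
distances = numpy.sqrt(dist_sq)

# connect i and j with a probability that decays with distance
# (the exponential kernel and its length scale are my own choices)
length_scale = 0.2
prob = numpy.exp(-distances / length_scale)
numpy.fill_diagonal(prob, 0.0)  # no self-connections
adjacency = numpy.random.uniform(0.0, 1.0, (size, size)) < prob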

Wednesday, July 13, 2011

Living wills: an absurdity

The living will kicks in only for those who are "permanently incapable of understanding information about the medical treatment and its consequences, due to the ascertained absence of integrative cortical-subcortical brain activity, and who therefore cannot make decisions concerning themselves"


Can you believe it? They might as well have just banned living wills outright.

Tuesday, June 28, 2011

Retroactive facilitation (updated)

Some time ago, I told you about a paper, published in the Journal of Personality and Social Psychology, that reported effects observed in large-scale experiments which could be explained only by precognition. Naturally, a furious debate followed. My objection 5) was:
Some Dutch mathematicians took issue with the kind of statistical tests used by Bem, claiming that more refined tests are needed. To forestall this objection, Bem used statistical tests considered standard in the field of psychology. So the criticism by Wagenmakers et al. is a bit of a double-edged sword because, if correct, it would invalidate more or less all the research in the social sciences (and beyond) done in the last 50 years.
I kept following the debate for a while, and for this and other reasons I started studying Bayesian inference techniques a bit more closely.

About a month ago a paper appeared in which Rouder and Morey improve on the Bayesian method previously used to carry out a meta-analysis of the data collected by Bem. The method used in that earlier paper, in fact, did not take into account that all of Bem's experiments pointed in the same direction.

Armed with this new method, they reanalyzed Bem's data, finding for one of the 4 types of stimuli a Bayes factor of 40, which is "noteworthy", to quote the authors, but not sufficient evidence for precognition effects, contradicting Bem's conclusions.

What is the point of all this? First, to encourage you to read Rouder and Morey's paper, which is very clear about the pitfalls of null-hypothesis significance testing. Second, to convince the skeptics among you (a word to the wise...) that Bayes is superior to significance testing, according to which we should all now believe in ESP.
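As a footnote for the curious, here is a toy Python sketch of what a Bayes factor computation looks like for a binomial experiment of the Bem type. The counts and the uniform prior on the hit rate are made up for illustration and have nothing to do with Bem's actual data or with Rouder and Morey's (far more sophisticated) meta-analytic method:

from scipy import integrate, stats

hits, trials = 527, 1000  # hypothetical: 527 hits in 1000 binary trials

def likelihood(theta):
    return stats.binom.pmf(hits, trials, theta)

# H0: hit rate is exactly 0.5 (pure chance)
marginal_h0 = likelihood(0.5)

# H1: hit rate uniform on (0.5, 1), i.e. prior density 2 on that interval
marginal_h1, _ = integrate.quad(lambda t: 2.0 * likelihood(t), 0.5, 1.0)

print("Bayes factor BF10 =", marginal_h1 / marginal_h0)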