More Open EdTech SDGs
Another year, another controversy born out of the winners of the Nobel Prize. When it comes to the bitterly political issue of poverty alleviation, arguably more massive projects deserve recognition than this year's laureates: Esther Duflo (only the second woman to win the economics Nobel), Abhijit Banerjee and Michael Kremer.
Not saying they should not have won. The Royal Swedish Academy of Sciences awards the prize on academic merits. And the winners have in fact opened new ways of understanding how to tackle poverty issues. Ways that do not interfere with existing power structures, making them ideal for initiatives backed by the Sustainable Development Goals and the multilateral community that advocates for them.
So in line with the idea of pretending to empower when you're pretty powerless yourself, empirical economics ends up looking like an interesting place to be. Ultimately, a mix of smart, laser-focused incentive mechanisms, coupled with enhanced long-term decision-making skills (some would call that education) among those living in poverty, could make for an exciting and relatively accessible experimentation cocktail. All of this is doubly true when seen from the point of view of information technologies at the service of education and learning.
The LMS: Bottom-of-the-pyramid EdTech
Digging just a little into the vast volume of research from Duflo, Banerjee and the hundreds before them is enough to realize this line of research is full of limitations that would escape campus-bound faculty. If the research methodology looks simple for a Nobel, it's because it is. We're not praising any brand-new statistical thinking here, but rather the efforts that have finally made centuries-old, double-blind controlled statistics work in social science research.
But let's go to the heart of things: How do you successfully create education systems that lead to human and economic development, in a measurable, scalable and replicable way? By way of long-term studies and randomization, we get a few starting answers:
- Scaling good ideas is fine, as long as the principles are understood and properly replicated at large scale. Funnily enough, one of the early findings related to increased educational achievement was the benefit of small class sizes.
- Incentives work, especially when using social pressure. Keeping them as simple as possible is desirable in practice, as it allows for better risk management of unintended consequences, which tend to spiral out of control somewhat easily.
- But there is a difference between simple principles that work when applied, and simplistic ideas about behavior. For the most part, straightforward incentives based on extrinsic rewards have a shelf life. This finding is key because, among other things, it helps us unify loose ideas about behavior, performance and productivity, even Maslow's hierarchy of needs: if you persuade someone to increase their educational attainment by giving them food or cash, one proof that the incentive worked is that, as a result of that achievement, the person will go on to demand the satisfaction of a higher need.
- Many education initiatives are flawed from the start by failing to account for basic ingredients of the schooling formula. Stronger predictors of attainment than class size include having actual classrooms, minimal daily nutritional needs met, and stomachs dewormed.
- Last but not least, it's important to be mindful of the difference between statistical significance and actual human and economic development impact. It is noteworthy that family cash transfers increase primary school enrollment by 3.4% across a variety of contexts. But it's also just 3.4%.
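The gap between "statistically significant" and "transformative" can be made concrete with a quick simulation. The sketch below is illustrative only, not the laureates' methodology: the 70% baseline enrollment rate and the sample size are assumptions made up for the example; only the 3.4-point lift comes from the figure above.

```python
import math
import random

random.seed(42)

def two_prop_z(n_per_arm, base_rate, lift):
    """Simulate one randomized trial: each child in the control arm
    enrolls with probability base_rate, each child in the treatment arm
    with probability base_rate + lift. Returns the two-proportion
    z statistic comparing the arms."""
    control = sum(random.random() < base_rate for _ in range(n_per_arm))
    treated = sum(random.random() < base_rate + lift for _ in range(n_per_arm))
    p_c, p_t = control / n_per_arm, treated / n_per_arm
    pooled = (control + treated) / (2 * n_per_arm)
    se = math.sqrt(2 * pooled * (1 - pooled) / n_per_arm)
    return (p_t - p_c) / se

# With thousands of children per arm (an assumed sample size), a 3.4-point
# lift over an assumed 70% baseline easily clears the conventional
# |z| > 1.96 significance bar, even though enrollment for the vast
# majority of children is unchanged.
z_large = two_prop_z(10_000, 0.70, 0.034)
```

The point is that significance scales with sample size while the human impact does not: run the same function with a few dozen children per arm and the very same 3.4-point effect typically fails to register at all.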
From Learning Analytics and LMS Data to Educational-Systemic Analytics
An implication of the Goodhart paradox is that, all else being equal, the more encompassing your KPIs are, the more sustainable your model, plan or organization will be. This is why some are already talking in terms of KSIs, "Key Systemic Indicators."
This means we first move from an individual's degree to group-based scoring, which at least in principle should motivate teamwork, collaboration and the exchange of knowledge. Going one degree further would mean discussing performance in terms of faculty's or educational leadership's ability to produce evidence-based outcomes sustainably. Thrown into the discussion is a shift from ROI to ROE, which is akin to abandoning cost-benefit thinking and adopting a vision and a language of institutional capabilities.
But can a system be built all at once? Probably not. Not even the laureates provide a framework for an encompassing blueprint of sound educational systems that optimize their paths out of poverty. In fact, not many are happy about some of the procedural decisions made by charities and NGOs following the awardees' advice: achieving statistical significance through randomized controlled trials meant providing nourishment or quality education to some while withholding it from the rest. This poses a complicated landscape for creating new insight.
It is also perilous to draw large conclusions from past findings, especially since, many years in, we still live in a reproducibility crisis.
Let's wrap it up by highlighting that the awards are not a recognition of the findings, but of the process. An undeniable tour de force, the result of academic, personal and ethical commitments few peers were willing to make. Duflo would spend seasons on end in the field in Kenya, mostly trying to keep villagers' hopes up (and her own) as she moved forward with a weeks-long behavioral experiment that might or might not pan out; or flying back and forth to secure political will from governments and academia.