A CONCEPTUAL FRAMEWORK FOR A DYNAMIC DATA ECOSYSTEM
FOR AGRICULTURE AND THE ENVIRONMENT
Primary Mission Statement
“Build a Comprehensive Data Network for Agriculture and the Environment”
The Persephone project has arisen out of the ongoing need to find a comprehensive solution to our growing agricultural and environmental woes. It presents a framework in which to holistically address our shared problems and to collectively design equitable and efficient solutions to them.
The main objective of the Persephone Project is to build a comprehensive ‘closed loop’ data framework to improve the production, sustainability, quality and consumption of agricultural products.
The Persephone Project further recognizes that stakeholders are integral to, and the ultimate object of, the system
and so will adopt a ‘Wealth Creation through Distribution’ policy based on Metcalfe’s law. The project has the ambitious target of reaching and including a billion small farmers over the next seven years in order to provide:
1) Access to relevant agricultural data
2) Tools to manage natural resources equitably and sustainably
3) Access to virtual and physical marketplaces
4) Automated mechanisms to manage produce logistics and authentication
In addition the framework will aim to generate significant ‘real time’ land use and environmental data at the local, state and
national scales in order to provide markets and governmental bodies with accurate crop and environmental data.
The Persephone ecosystem is modular rather than a single homogeneous structure, with ‘no component within the network relying on another to function and no component essential to the function of the network as a whole’. The Persephone ecosystem is therefore robust, negating the potential for a single point of failure in the network. Its modular design contains four core and two sympathetic components:
Despoena – Agricultural Resource Database and Management Tools
Paradigm – Agricultural and Environmental Database
Gateway – Entry level Crypto-currency tailored for subsistence farmers
VAMp – Virtual Agricultural Marketplaces
Juggernaut – Semi Autonomous Produce Logistics
IPAL – Independent Produce Authenticity Log
The Persephone ecosystem will be built upon the ADS stack concept of Application, Database and Storage layers. Whilst it is a complete network capable of supplying critical data to small farmers, opening access to wider markets and improving produce logistics and authenticity, it will retain the ability to add other features and to communicate and inter-operate with other electronic networks built upon the same stack architecture.
Gateway (GTE) – Entry Level Crypto-currency.
A standalone crypto-currency, Gateway will function first as a means to raise capital (via an ICO) to build the other components and then as an entry-level crypto-currency for subsistence farmers. With permissioned nodes, large blocks and only verified wallet owners, Gateway will be a secure, low-cost and user-orientated blockchain. Additionally Gateway will introduce three novel features: Wealth Creation, Wallet Recovery and Exodus (a unique solution to the proof-of-work difficulty problem).
Despoena: Agricultural Resource Database and Management Tools
Despoena is a database of land use and resource availability complemented by a suite of apps designed to aid crop, soil, livestock and resource management at the farm, catchment and regional levels. At its heart is the Land Use Inventory (LUI): a self-performed audit of the farm and its resources. By performing the audit and uploading the data to the network the farmer will earn tokens and gain entry to the Gateway token ecosystem. The data in the LUI will then be matched with scientific data in Paradigm to provide the farmer with soil, crop and livestock husbandry advice. The LUI will also supply data to DAOs which in turn will provide resource management services at the catchment and regional level. Other DAOs will keep markets, via the VAMp, informed of crop progress throughout the growing season.
Paradigm: Agricultural and Environmental Database
The world’s environmental knowledge in a single accessible commons database is the aim of Paradigm: a structured, searchable database that will power the applications of Despoena. Moving beyond agriculture, Paradigm will grow to become a repository for all our knowledge on the planet’s biosphere; a comprehensive library with which to sustainably manage the planet’s ecosystem.
VAMp : Virtual Agricultural Marketplaces
Virtual Agricultural Marketplaces will be built so that farmers can advertise their produce to physical marketplaces.
Real time data from the DAOs and the management apps will keep the VAMp up to date with crop progress and expected harvest dates. It is envisaged that the VAMp will encourage a dynamic relationship between the farmers and the physical markets to evolve so that the two plan together. An ‘Agricultural Bazaar’ where farmers can use their Gateway (GTE) tokens to buy and sell certified seeds, tools and other aids to crop production is also envisaged. Both aspects will be engineered and managed to encourage sustainable development and generate real value within the Gateway token ecosystem.
Juggernaut: Semi Autonomous Produce Logistics
Juggernaut is a complementary service to the existing transport and logistics industries. Using mapping technology it will connect disparate entities in the supply chain and then plan the logistical movements of their goods. With the aim of reducing transport costs, improving delivery times and reducing cargo losses, Juggernaut is seen as an intermediate stage to a fully autonomous logistics network.
IPAL: Independent Produce Authenticity Log
IPAL is a blockchain supply chain tool to track the movement of goods from farm to fork and provide an authenticity log to the final consumer.
Whilst all the components in the Persephone ecosystem are standalone (none are integral to the whole system), each component will be designed with the others in mind to give fast, seamless operation across the network. This separation of function serves to make upgrades and maintenance to the system easier, and provides robustness by negating the potential for a single point of failure or attack on the network. The ecosystem described here should not be regarded as exhaustive but as a summary of what can be achieved.
The executive summary of GODAN’s recent discussion document ‘A Global Data Ecosystem for Agriculture and Food’, (the cover of which manages to somewhat capture the problem with the modern agricultural environment), calls for:
“..a common data ecosystem, produced and used by diverse stake-holders, from smallholders to multinational conglomerates, a shared global data space..”
The report identified stakeholder engagement, provenance in data sourcing and handling, sharing, and collaborative frameworks as key components in developing a global data ecosystem.
Stakeholder Engagement and Data Integrity
However, the authors note that within the agricultural sector “many groups might not have obvious motivation to participate in data sharing and use…” and that “..in order to get trust-worthy data, there has to be a direct reward to the data supplier.” They further state that “a large part of the motivation for data sharing has to do with how widely it will be shared, with whom and under what conditions.”
There is, justified or otherwise, suspicion that data may be misappropriated to the provider’s disadvantage or provide disproportionate advantage to others. The perceived risk of negative unforeseen consequences can outweigh any potential benefits of sharing data, particularly when those benefits cannot be readily quantified or realized in the short term.
Stakeholders may develop a ‘big brother’ mentality, responding by withholding data or deliberately providing inaccurate data in the belief that they are better served. This problem is amplified in the provenance of agricultural products, which “undergo a chain of transformations and pass through many hands on their way to the final consumer”. One lapse in the veracity of data at any point in the chain potentially undermines all the data in that chain. Sadly, these issues are not just relevant to small farmers and supply chain operators but are as prevalent and as strongly held by many of the big data holders such as trans-national corporations, governments and academic institutions.
Whilst the integrity of the source and the veracity of the data are important factors in building a global data ecosystem, the authors further identified ‘documentation, support and interaction’ as key to fostering trust. Data providers and users need to interact so as to serve each other’s needs better and to ensure that stakeholders feel included, not just sampled. Stakeholders need to be confident that sharing data with the whole ecosystem brings no negative consequences or disproportionate benefits.
Where the data is held, who maintains it, its veracity, accessibility and availability to the whole ecosystem, as well as who pays to deliver those services, are issues that also need to be addressed. A global data ecosystem cannot rely on single large repositories acting as data silos, or on individual data providers to maintain data crucial for network function. Data needs to be distributed and maintained across the system to prevent bottlenecks and failure points. The concept of the ADS (application, database, storage) network, which exploits the distributed network concept, could potentially offer resolutions to many if not all of these issues.
Data Conformity and Convention
Whilst stakeholders need an environment that is transparent, robust and secure, the data, along with all the documentation and support in that environment, needs to conform to certain conventions. The ‘five star open data maturity model (available, structured, non-proprietary format, referenceable and linked)’ lays out a basic checklist, but these properties themselves need to further conform to taxonomies and naming conventions (controlled vocabularies) that are interdisciplinary and facilitate data from different sources being easily related. These conventions must themselves be explained in, and applied to, any documentation and support.
“In order to get trust-worthy data, there has to be a direct reward to the data supplier”
For large stakeholders, governments and corporations, that reward may come from the need to provide proof of meeting sustainable development goals and climate commitments, but for smaller stakeholders the same incentives may not apply. The question needs to be asked: “what’s the data worth?” or, more importantly, “what is the cost of not having the data?” Can we achieve global sustainability goals and climate objectives without the majority of stakeholders taking part? If we can’t, is it worth weighting benefits in the short term to favour the smaller stakeholders and encourage them, perhaps even in the form of payment for engaging? If so, can technologies such as blockchain be used to verify data and facilitate those payments? One possible use for such a mechanism would be the annotation of data such as satellite imagery.
The authors draw attention to the fact that sharing data is only the start; “It is one thing to share data, but to achieve the desired gains from a data ecosystem for agriculture, to draw conclusions across the globe to guide decision making, it is necessary to exploit synergy between datasets efficiently.”
Such synergies, however, arise out of a framework that extends beyond purely agricultural data to one that includes all environmental data. It is a framework that similarly needs to be able to seamlessly integrate with more mundane economic, sociopolitical and legal data and frameworks, an integration that will itself give rise to greater synergies between our economic activities and their environmental consequences. Di-Functional Modelling (DFM), to which most of this site is dedicated, is one such framework.
Di-Functional Modelling (DFM)
Designed around the concept of soil fertility, DFM was created to model the processes and resources that contribute to the sustainable management of an environmental project. In the normal course these would be the soils of an agricultural unit, a group of units, or a component within a unit such as a field, forest or grassland.
DFM is not though a database, blockchain or application but a framework or ‘ecosystem’ within which the inter-dependencies of the whole system can be more easily visualized. DFM can thus assist in the development of databases, blockchains and applications that are inter-operable and can exchange and verify environmental and agricultural data [Data Databases and Distributed Networks].
DFM similarly models the processes and functions of an agricultural system relative to the whole; a whole that further extends to the interactions and exchanges that occur between natural systems and the socioeconomic systems they support. These sociopolitical, economic and legal systems are themselves nested within the model.
These inner mechanisms are connected to the environment by existing supply chain mechanisms, data from which can reveal the true sustainability or carbon footprint of agricultural goods [TRASE]. Further enhancement of these mechanisms with relevant data should make it possible to trace the ingredients of a chocolate bar from field to retail outlet, every step along the way, to give a grand total of the true cost of the indulgence in terms of carbon, habitat or social impact. Once calculated, the totals could be added to your own personal tally of GHG emissions, habitat loss and social deprivation. [strengthening the food chain with the block chain]
DFM was, though, conceived for and is best used to help determine localized land use, crop choices and management strategies based on the available resources and the soil, habitat and hydrological properties. It was not envisaged as a top-down tool but as a tool to be applied at the farm end: to provide a means both to audit the farm and its resources and to structure that audit in a way that facilitates integrating scientific data. By repeating the process on successive farms and linking those farms through a content management system, each audit would contribute to a greater one, permitting each unit to enhance its own data with that of neighbouring farms. Extended over a region, the framework would help to manage and allocate resources, plan crop choices and integrate with the natural environment: a Shared Global Data Ecosystem that mirrors the Shared Global Ecosystem we call home.
Towards a Data Ready Farm
The Sustainable Farm
The sustainable farm, and by extension a sustainable agricultural sector and planet, is one underpinned by knowledge and driven by data: knowledge and data that can contribute to crop and livestock choices, resource management and ultimately reveal the sustainability, or not, of an enterprise.
The data ready farm is thus aware of its own resources, the resources of the surrounding environment, and the relationship it has with those resources and the markets it supplies.
Local Knowledge: A Land Use Inventory
Whilst technology has a significant role to play, the data ready farm begins with knowledge of itself, the land use (woodland, cultivated, grassland), the inherent properties (soil and water resources), as well as the livestock and crops that depend on those resources. It is a simple inventory at the local scale; one which requires no equipment to perform.
Land Use: Woodland, Cultivated, Grassland
Inherent Properties: Soil Texture and Water Resources
Land Dependants: Livestock and Crop Choices
The inventory should distinguish land use according to basic habitat criteria: woodland, grassland and cultivated. As this is a farm, the cultivated habitats further differentiate into arable (short rotation), permanent (orchards, vineyards, etc) or heterogeneous (covered crops, flowers, etc). The woodlands and grasslands similarly differentiate, but at this point only grassland connected with farming, pasture and rough grazing, needs to be differentiated. The boundaries between and within the habitats, along with any hedgerows, fences or banks on those boundaries, and the position of any wells, standing or running water within them, should also be recorded and mapped. Even if the farm appears homogeneous, with only one land use, crop or livestock, it is still likely made up of several parcels of land with varying properties; properties that are not easily visible in themselves but can be revealed by the recording and analysis of simple data, such as soil texture.
[Figure: hand textural chart by S. Nortcliff and J.S. Lang, from Rowell (1994)]
Soil texture, a property that arises out of the relative proportions of sand, silt and clay, strongly influences the hydrological and nutrient characteristics of the soil. Variations in soil texture across a field or farm can thus reveal changes in the hydrology or nutrient status of the soil.
Soil texture can be measured by taking a small sample of soil from just below the surface (10cm). Moistened with water or spit, the sample is moulded with the hands into a ball. The ball is then deformed, its malleability noted and checked against a chart. Samples are usually taken along a ‘W’ transect positioned across the face of a field and the data bulked to provide a single textural class for that field/plot. All that then remains is to quantify the livestock and crop choices that depend on the land; at this point it is just to list the type, number and location of stock and crops. This basic reconnaissance map, which needs no equipment to create, can be drawn onto a piece of paper to identify the land use, crop choices, soil texture, location of water and number of livestock.
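As a sketch of how such a reconnaissance map might be captured digitally, the record below structures the inventory categories described above; all field and class names are hypothetical, invented for illustration and not taken from any Despoena specification.

```python
from collections import Counter
from dataclasses import dataclass, field
from enum import Enum

class LandUse(Enum):
    WOODLAND = "woodland"
    CULTIVATED = "cultivated"
    GRASSLAND = "grassland"

@dataclass
class Parcel:
    """One parcel in a Land Use Inventory (field names are illustrative)."""
    name: str
    land_use: LandUse
    soil_texture: str = ""                            # class from the hand chart
    water_sources: list = field(default_factory=list)  # wells, streams, ponds
    livestock: dict = field(default_factory=dict)      # type -> head count
    crops: list = field(default_factory=list)

def bulk_texture(samples):
    """Bulk the 'W' transect hand-texture readings into a single
    textural class for the plot by taking the most common reading."""
    return Counter(samples).most_common(1)[0][0]

plot = Parcel(
    "east field", LandUse.CULTIVATED,
    soil_texture=bulk_texture(
        ["clay loam", "loam", "clay loam", "clay loam", "sandy loam"]),
    crops=["maize"],
)
print(plot.soil_texture)  # clay loam
```

Structuring the paper map this way is what would later let the audit be matched against external scientific data.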
A Local Inventory in a Global Context
With remote sensing and mobile technology the inventory and soil data could be annotated directly onto a map from the field. Coupled with geostatistical strategies, this could be further developed to create complex contour maps of textural variation across agricultural landscapes. With additional external scientific, environmental and economic data, this local inventory could be qualified relative to the global economy.
Into this inventory scientific data relevant to the sustainable management of resources and the husbandry of crops and livestock can be appended.
Meteorology: Quarterly precipitation figures
Crop Data: Nutrition, culture, pest and disease
Livestock Data: Nutrition, stocking numbers, general husbandry
Soil: Mineral data, 345 nutrient model
Integration of environmental data can help the farm be sympathetic to the needs of the natural environment and the species that inhabit it. Aware of the environments and species around it the data ready farm can identify synergies and conflicts and then use that data to find resolutions to conservation, pollution and emissions issues.
Conserve habitats and species
Prevent pollution from soil erosion and nutrient leaching
Reduce emissions from livestock and management practices
To meet global sustainability goals the data ready farm must link and integrate with the ‘wider’ economic, sociopolitical and legal frameworks. Data from supply chain mechanisms, political policies, and legal and administrative bodies must integrate seamlessly with data from the agricultural and natural environments to meet SDGs and climate change objectives.
Supply Chains: TRASE and the blockchain
Legal Frameworks: COP22 objectives
Political Policy: Paris Agreement
A Local Data Hub
A farm that is aware of itself, the environment and the markets it supplies has the means to measure its sustainability relative to that environment and those markets. A farm integrated with neighbouring farms, however, can improve its sustainability. A locally connected farm has greater resilience and can better manage and share resources, integrate crop and livestock choices, and supply markets more efficiently. A local data hub can connect remote farmers and help to build trust and educate in using and sharing data.
Applications and Databases
To move beyond a simple inventory and into a sustainable, data-driven future requires the development of applications and databases that complement the framework. Some, such as TRASE, already exist, but local databases and applications to share data within a comprehensive and structured framework still need development. [Data Databases and Distributed Networks]
Whilst renewable energy will play a significant role in replacing fossil fuels, it cannot replace them entirely. To achieve a zero emissions future the world needs to reduce, and be more efficient in, the way it uses all energy. Similarly, heavy industry such as steel cannot run on renewable energy, and whilst carbon capture technology could remove and store CO2, it is not a solution to all our fossil fuel emissions.
Where fossil fuels cannot be replaced by renewable energy or the CO2 captured and stored, offsets (mitigation strategies to sequester carbon equivalent to the emissions) could be utilized so that the net effect is zero. Since all the reforestation and afforestation strategies are needed to capture historical CO2 emissions [can’t see the woods for the trees], offsets for future fossil fuel burning must look to exploit other measures to sequester carbon, most of which lie within land use and agricultural practices.
Note: The figures below are all approximations derived from easily available data on the internet; data which is itself little more than guesswork. The purpose is therefore not to provide quantitative analysis but a qualitative summary: to encapsulate the scale of the problem we face and the potential of a given action to contribute to the solution. Only then can we truly quantify a system.
Soils as Carbon Sinks
Compost, green manures, biochars and zero till have all been widely suggested as means for agriculture to mitigate CO2 into soils. Whilst changes in land use and management result in changes in a soil’s carbon content, climate, soil texture, hydrology and depth are also significant factors in a soil’s capacity to sequester carbon. Tillage, particularly excessive tillage, causes soils to lose carbon and, as a general rule, the less disturbance a soil has the greater its capacity to store carbon. Thus the greatest natural stores of soil carbon are found in the undisturbed soils of forests and grasslands, whereas the lowest content is found in cultivated soils. It is within the cultivated soils, 1.5 billion ha (11% of the Earth’s land surface), that the capacity to act as carbon sinks chiefly lies.
Compost has a number of significant benefits when used in agriculture [compost science] and can indirectly lower emissions by improving soil structure and reducing tractor fuel consumption, and by improving nutrient cycling and reducing fertiliser inputs. However, as a means to sequester carbon its benefits may be minimal.
Microbial action on the compost, the action responsible for improving the soil structure and nutrient cycling, is simultaneously using the compost as an energy source and respiring in the process. The natural process by which microbes break down the organic matter, releasing the nutrients, also releases the carbon as CO2 back into the atmosphere. The process roughly follows a first order kinetics curve where 50% remains after three years, 25% after six, 12% after twelve, 6% after twenty-five and 3% after fifty years. Whilst crude approximations, the general rule for compost added at regular intervals (1-3 years) is that it achieves a net balance of soil organic carbon after 50 years. Further additions simply maintain that level.
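The rough decay schedule above can be illustrated with a small cohort model: one unit of compost is added each year and each year’s addition decays according to the quoted fractions. The linear interpolation between the quoted points, and the assumed flat 3% tail beyond fifty years, are simplifications added purely for illustration.

```python
# Remaining fraction of a compost addition after t years, linearly
# interpolated between the rough figures quoted in the text.
POINTS = [(0, 1.00), (3, 0.50), (6, 0.25), (12, 0.12), (25, 0.06), (50, 0.03)]

def remaining(age):
    if age >= 50:
        return 0.03  # assumed flat tail beyond the quoted schedule
    for (t0, r0), (t1, r1) in zip(POINTS, POINTS[1:]):
        if t0 <= age <= t1:
            return r0 + (r1 - r0) * (age - t0) / (t1 - t0)

def stock(years, addition=1.0):
    """Soil carbon stock after `years` of one addition per year:
    each annual cohort is discounted by its age."""
    return sum(addition * remaining(age) for age in range(years))

for y in (10, 25, 50, 100):
    print(y, round(stock(y), 2))
```

The stock’s growth slows sharply as older cohorts decay away, which is the ‘net balance after 50 years’ behaviour the text describes: later additions increasingly just replace what the microbes have respired.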
For an agricultural mineral soil, SOC cannot be raised above 10%, but with the current global average likely as low as 4% there lies a potential to increase soil carbon through compost additions. As a tool to sequester carbon, compost could potentially sink 135 billion tons of carbon over a 50 year period and raise mean SOC levels to 10%; equivalent to 2.7 billion tons of carbon a year. It would, though, require 135 billion tons of compost, equivalent to 19 tons per person. That is an unrealistic figure, and even a target of 7.2 billion tons (one ton for each of us) would likely still be ambitious. Such a target would similarly offset less than half a billion tons of carbon emissions a year. So whilst compost is an important component in achieving sustainability in agriculture and can indirectly reduce farm emissions, it is not a means to directly offset fossil fuel emissions from other industries.
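A quick back-of-envelope check of the figures quoted above, taking the text’s own numbers at face value:

```python
# The text's figures, not measured data.
total_compost = 135e9   # tonnes of compost over the 50-year period
years = 50
population = 7.2e9      # world population assumed by the text

per_year = total_compost / years          # 2.7 billion tonnes a year
per_person = total_compost / population   # ~18.75 tonnes per person

print(f"{per_year / 1e9:.1f} Gt/yr, {per_person:.1f} t per person")
```

The arithmetic reproduces the 2.7 billion tons a year and roughly 19 tons per person quoted in the text, which is what makes the 135 billion ton figure so plainly unrealistic.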
Chars (Biochar and Charcoal)
Unlike compost, chars (biochar and charcoal) can be made from any carbon-containing material, including animal carcasses and plastics. Produced by pyrolysis (heating the material in a reduced oxygen environment), the carbon is converted to a more inert, mineral-like substrate that does not particularly interact with the soil matrix. Behaving more like aggregate (stone), chars are highly resistant to microbial action and can remain unchanged for 100 years or more. Chars do not in themselves contain or contribute any nutrients but, with properties similar to vermiculite, they can improve soil structure and influence nutrient cycling, soil acidity and soil hydrology. Chars can likely be added to match or exceed the soil’s natural carbon balance without affecting the soil’s ability to function, and may actually improve it. This gives a theoretical offset value of at least 100 billion tons of carbon as char into the world’s cultivated soils.
Chars could also be incorporated into soil prior to afforestation. With the world needing to create at least 10 million ha of new woodland every year for the next 30 years, as much as 7 billion tons of carbon as chars (a quarter of a billion tons a year) could be added to soils prior to planting woodland.
Precisely where the world gets 100 billion tons plus of char from without chopping down a rain forest is another matter, but as a one-time means to sequester carbon and offset emissions whilst we build the renewable replacements, chars offer great potential. Put into context, however, 100 billion tons of carbon is roughly the amount of emissions produced from fossil fuels over the last three years; so we need to hurry up with building the replacements.
Green Manures
More a mechanism for improving soil structure and reducing nutrient losses through better cycling, green manuring adds some carbon to the soil. However the carbon is not resistant and is subsequently liberated through microbial action. As with the fuel savings and nutrient cycling of compost, green manures provide similar benefits which ultimately reduce the gross CO2 emissions of a farm unit but do not in themselves sequester significant amounts of carbon.
Zero Till
As much a political argument as an environmental one, zero till as a management strategy can sequester carbon. However it is a strategy that cannot be used in conjunction with compost and green manures and so relies on higher fertiliser and herbicide use. This results in increased N2O emissions which largely cancel out the benefits of the carbon sequestered. So whilst there may be other benefits to zero till, such as fuel savings, there is likely very little gain from the carbon sequestered.
Wetland Restoration
Restoring drained wetlands could prove easier and sequester twice the carbon that would be sequestered by afforesting the same area. With Europe having lost 66% of its wetlands over the last 100 years and the USA 53% since the 1600s, there lies the potential to sequester several hundred million tons of carbon in restoring North America and Europe’s wetlands. The global potential could well be in excess of 2000 million tons of carbon. It is worth bearing in mind that the conditions which led to the formation of fossil fuels in the first place, particularly coal, were those of a planet dominated by swamp forests. The carbon we have released over the last 100 years was originally captured and stored by the wetlands of the Carboniferous period.
In addition to restoration, creating new bogs, swamps, salt marshes and coastal lagoons could further sequester large amounts of carbon, create habitat and provide coastal and flood protection from future sea level rises.
Biomass and Bio-gas
Biomass, be it wood, straw or cow dung, produces GHG emissions when burned. Those emissions may not be from fossil fuels but they are emissions nonetheless. Whilst research suggests that over the long term biomass results in net zero emissions, in the short term it may actually add to the problem.
Biomass fuels are also controversial since they require the re-purposing of crops and cropland to grow the biomass: for every ha of biomass grown, a ha of food land is not. Furthermore, biomass used for fuel is biomass that cannot be used for char manufacture, and as chars offer some offset value, any diversion of biomass to energy production impacts that potential.
Biogas, where manures and other organic wastes are used in anaerobic digesters to produce methane, has the potential to reduce the reliance on fossil fuels but not GHG emissions. Biogas does not reduce emissions; it replaces fossil fuels with methane, similarly relies on passive uptake to offset those emissions, and thus maintains the net emissions total. Unless carbon capture technology is subsequently applied this gives only an efficiency gain; one that, to make the manure ‘mineable’, requires animals to be intensively reared in sheds.
Bio-diesel: Tree Borne Oil Seed
Bio-diesel can be made from any oil-bearing seed, including crop plants such as sunflower or maize, or from high oil-bearing non-edible species such as Jatropha. However, as with biomass, the re-purposing of crops and lands to produce bio-diesel is controversial. Jatropha, and other shrub and tree borne oil seeds (TBOs), could however be grown as hedgerow without compromising crop lands.
Grown as hedgerow, TBOs would further provide carbon sequestration, erosion protection and habitat creation whilst having minimal impact on the land’s ability to produce crops. This is not so much an offset as a true net zero emission replacement for fossil fuels: the oil being a commercial component in a hedge that sequesters carbon, prevents soil erosion and provides habitat.
Such a system could even grow bio-diesel for another industry such as international shipping, which emits 0.6 billion tonnes of CO2 (1.74% of global emissions) per year from burning 0.2 billion tonnes of heavy fuel oil [fossil fuels result in 3.15 times their mass of CO2 when burned]. Switching to, or blending, bio-diesel with heavy fuel oil would directly reduce fossil fuel emissions.
However, to grow sufficient Jatropha to supply current international shipping with 0.2 billion tonnes of bio-diesel would require 35 million ha of land (2.3% of cultivated land), an area the size of Germany.
International aviation, which similarly contributes 0.5 billion tons of CO2 per year using a slightly more refined fuel (kerosene) than heavy fuel oil, could also potentially switch to bio-diesel. Aviation fuel (kerosene) has a lower wax content to prevent it solidifying at altitude (low temperature), whereas organic oils tend to solidify. If a bio-diesel that remains liquid at low temperatures could be found it could potentially replace aviation fuel, but as with shipping it would require an area of land covering 28 million ha (1.8% of cultivated land) to grow it.
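Taking the shipping figures above at face value, a short calculation recovers both the CO2 total and the oil yield that the land estimate implies; the 3.15 conversion factor and all the quantities are the document’s own approximations, not measured values.

```python
# Figures taken from the text; the 3.15 tonnes-of-CO2-per-tonne-of-fuel
# factor and the land area are the document's own approximations.
CO2_PER_TONNE_FUEL = 3.15

shipping_fuel = 0.2e9   # tonnes of heavy fuel oil burned per year
shipping_land = 35e6    # ha quoted as needed to grow the bio-diesel

shipping_co2 = shipping_fuel * CO2_PER_TONNE_FUEL   # ~0.63 Gt CO2/yr
implied_yield = shipping_fuel / shipping_land       # tonnes of oil per ha

print(f"{shipping_co2 / 1e9:.2f} Gt CO2/yr, {implied_yield:.1f} t oil/ha")
```

The CO2 total agrees with the 0.6 billion tonne figure quoted, while the implied yield of roughly 5.7 tonnes of oil per hectare looks optimistic against typical reported Jatropha yields; which rather underlines the text’s own caveat that these are qualitative figures.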
With all reforestation and afforestation projects for the next 100 years set aside to capture historical carbon, only chars and wetland restoration remain as strategies that offer any significant opportunities for carbon offsets on agricultural land, whilst only bio-diesel, derived from non-crop plants on non-crop land, can be considered a renewable bio-fuel with the potential to replace some fossil fuel use. Biomass production, bio-diesel’s larger, more comprehensive cousin, competes with both food and forest crops for land and with chars for raw material: we cannot both burn our waste wood and char it. Similarly, whilst biogas makes an efficient use of methane emissions, it converts those emissions to CO2. So whilst biogas extracts some energy from GHGs en route to the atmosphere, it is not an offset nor a replacement but just a slower path to the same climate catastrophe.
The use of compost, green manures and reduced or conservation tillage practices does not in itself sequester significant quantities of long-term carbon, but as part of a comprehensive management strategy such practices can reduce fuel consumption and fertilizer use and thus the overall carbon footprint.
Agriculture is itself a major source of GHG emissions, and whilst deforestation is the largest contributor, land, crop, livestock and management practices all add to the total. These too need to be mitigated and offset.
Agricultural machinery, processing and the transportation of goods are all major contributors to the carbon footprint of an agricultural enterprise and its produce. The emissions from a farm are, though, complex; some, such as fuel and fertilizer use, can be aggregated, and crude approximations made of how the emissions distribute across crops and land.
However, the mechanisms by which emissions specifically accumulate onto goods within the farm, and as those goods pass through subsequent supply chains, are not transparent. So whilst figures may be accurate at the national and international level, they may not reflect what is actually happening within different units. This has the consequence of shifting responsibility onto the whole industry, masking the extent to which high emitters contribute and failing to acknowledge the efforts of low ones. So whilst it is possible to calculate global fossil fuel production, to approximate the net emissions resulting from those fuels by country and per capita [the global carbon footprint], and to further break this down by industry, it is difficult to extract meaningful information that differentiates between high and low emitters within those industries. What is therefore needed is a mechanism, a framework, that allows all the data to be quantified so as to reflect the true carbon footprint of any enterprise or goods at any scale and relative to the whole. A framework such as DFM.
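As a sketch of what such quantification might look like at the farm level, the following apportions a farm's aggregate fuel and fertilizer emissions across its crops pro-rata by area. All figures and names are hypothetical; a framework like DFM would track emissions per operation rather than by crude area shares.

```python
# A minimal sketch of apportioning a farm's aggregate emissions to its crops.
# All figures are invented for illustration; pro-rata allocation by area is
# the crudest possible approximation, not the proposed framework itself.

farm_emissions_t = {"fuel": 12.0, "fertilizer": 8.0}   # tonnes CO2e per year
crop_areas_ha = {"maize": 30.0, "soy": 15.0, "fallow": 5.0}

total_emissions = sum(farm_emissions_t.values())
total_area = sum(crop_areas_ha.values())

# Each crop carries a share of total emissions proportional to its area.
crop_footprint = {
    crop: total_emissions * area / total_area
    for crop, area in crop_areas_ha.items()
}

for crop, t in crop_footprint.items():
    print(f"{crop}: {t:.2f} t CO2e")
```

The point of the sketch is the gap it exposes: the pro-rata figures tell us nothing about how emissions actually attach to goods as they move downstream, which is exactly the transparency problem described above.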
“Since 1751 approximately 392 billion metric tonnes of carbon have been released to the atmosphere from the consumption of fossil fuels and cement production. Half of these fossil-fuel CO2 emissions have occurred since the mid 1980s.” [Carbon Dioxide Information Analysis Center]
It is widely reported that the World emitted another 38.2 billion tonnes of CO2 in 2015, an 8% increase on 2014’s 35.6 billion tonnes, raising the global average for CO2 from fossil fuel burning to 5.3 tonnes per person and adding another 10% to the 392 billion tonnes released over the last 200 years.
In 2014 the top five CO2-emitting nations, responsible for two-thirds of all fossil fuel carbon emissions, were China (10.5bt), the USA (5.3bt), the European Union (3.4bt), India (2.3bt) and Russia (1.7bt).
The five biggest emitters relative to population were all Gulf states: Qatar (39.13t), Kuwait (28.33t), the United Arab Emirates (21.3t), Oman (18.92t) and Saudi Arabia (16.8t).
Australia (17.3t), the USA (16.5t) and Canada (15.9t) came next, whilst Kazakhstan (14.2t) and Russia (12.4t) came 9th and 10th in per capita emissions.
China came 20th (7.6t), whilst the European Union, in 23rd place (6.7t), carried some big emitters such as the Netherlands (9.4t), Germany (9.3t), Belgium (8.7t) and Poland (7.8t). Only Spain’s and France’s emissions matched the global average of 5 tonnes. India, the 4th largest emitter by country, produced only 1.8t per head, putting it in 42nd place, 5th from bottom and beaten only by Indonesia, the Philippines, Pakistan and Nigeria. [Wikipedia]
So whilst the industrialised nations are the principal emitters of CO2 from fossil fuels, the residents of the Gulf states have a bigger carbon footprint than any other geographical region. Qataris in particular have 2½ times the carbon footprint of Americans and 43 times that of Pakistanis.
Land Use and Management
Land use changes, in particular deforestation, two-thirds of which occurs to supply just five global commodities [can’t see the wood for the trees], contribute a further 6.5 billion tonnes (11%) to global GHG emissions. Methane from livestock contributes a further 16%, whilst nitrous oxide from fertilizer use contributes another 6%. [EPA]
So whilst fossil fuel burning (65%) remains the main contributor to GHG emissions, land use changes and management practices are responsible for 33% of global GHG emissions. Added together, fossil fuel and land use GHG emissions raise the average global carbon footprint to 7 tonnes per person per year.
If global emissions continue to increase at 7% per year as developing nations catch up with the industrialized, then by 2020, when most nations expect to start implementing the Paris agreement, global emissions will be at 65 billion tonnes a year and global per capita footprints at 8.5t per person. We will have added another 250 billion tonnes of CO2 to the atmosphere, pushing the planet towards, if not over, the long-term temperature goal of Article 2:
“holding the increase in the global average temperature to well below 2°C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5°C above pre-industrial levels,”
Article 4 of the Paris agreement states that “In order to achieve the long-term temperature goal set out in Article 2, Parties aim to reach global peaking of greenhouse gas emissions as soon as possible.” However, even if the World went on to reduce its net emissions to zero by 2050, it would have added a further 900 billion tonnes of CO2 to the atmosphere in the process: sufficient to double atmospheric concentrations and cause a mean global temperature rise of 3-6 degrees Celsius by the end of the century. Such a scenario would lead to accelerated melting of the World’s glaciers and a global sea level rise in the tens of metres. It is also, on the evidence to date, the most likely scenario.
To avoid catastrophic climate change the World must significantly reduce the use of fossil fuels with immediate effect. It must develop and deploy, on a grand scale, alternative renewable solutions (wind, solar, water, biomass) and it must actively restore the tropical forests lost since 1990. It must further increase global forest extent by at least another 10% and it must continue to do so for the rest of this century and beyond.
It must do this because it did not act to arrest the problem 30 years ago, and if it waits another 30 it will be too late. Admittedly climate was not then on the international agenda, but deforestation, habitat loss and species extinction were [Our Common Future 1987]. Despite the warnings of the Brundtland report and the subsequent 1992 Rio Declaration from Earth Summit I, over a third of global deforestation has occurred during this period and the World is on the brink of the biggest extinction event since the dinosaurs. What to do?
A 75% reduction in fossil fuel use
If we cut our current fossil fuel emissions (38.2 billion tonnes of CO2) to reach a net emissions target of 10 billion tonnes of CO2 by 2030, a global carbon footprint of under 1.5t per capita, the transition would result in the emission of around 200 billion tonnes of CO2 and we might meet the Paris agreement threshold of 2 degrees C. If we take until 2040 to reach the 10 billion tonne baseline then 350 billion tonnes of CO2 emissions will result and we will likely miss the 2 degree threshold. Wait until 2050, however, and 500 billion tonnes of CO2 will be emitted; we will not be able to keep the global mean temperature increase below 2 degrees C, and in all probability it will already have been reached and exceeded by 2050. We will still need strategies to mitigate the 10 billion tonnes we produce as a baseline, as well as the 400 billion tonnes of historical emissions, but the longer we delay, the more difficult and painful it will become.
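Cumulative totals like these depend entirely on the shape and pace of the decline. As a rough check, the ~200 billion tonne figure for a 2030 baseline is consistent with a broadly linear ramp-down over about eight years; both the linear path and the eight-year window are assumptions, not figures from the text.

```python
# Estimate cumulative CO2 (Gt) emitted while annual emissions fall linearly
# from a start rate to an end rate. The linear decline and the eight-year
# window are illustrative assumptions.

def cumulative_gt(start_rate: float, end_rate: float, years: float) -> float:
    """Trapezoidal total for a linear decline: average rate times duration."""
    return (start_rate + end_rate) / 2 * years

# Ramping 38.2 Gt/yr down to the 10 Gt/yr baseline over ~8 years:
total = cumulative_gt(38.2, 10.0, 8)
print(f"~{total:.0f} Gt CO2 emitted over the transition")   # ~193 Gt
```

Stretching the same ramp out by a decade, or delaying its start, is what pushes the cumulative total towards the 350 and 500 billion tonne figures quoted above.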
The largest single agricultural cause of CO2 emissions is tropical deforestation. The UNFAO estimates that in the last 27 years 1.29 million km2 of forest, an area equivalent to France, Spain and Portugal combined, has been lost. Much of this forest has been felled to grow soybean, palm oil and beef for global markets.
An end to deforestation would thus dramatically reduce agriculture’s CO2 contribution, bringing it down to or even below a billion tonnes per year. A 50% reduction in the beef and dairy industries would similarly reduce agricultural GHG emissions by another 8%.
Together with reducing fossil fuel emissions to 10 billion tonnes, these measures would bring global GHG emissions to under 25 billion tonnes. We would still be adding to atmospheric concentrations of GHGs, albeit at half the current rate, which is still sufficient to maintain our course towards a climate catastrophe. A catastrophe we cannot avoid until we reduce our net emissions to zero and take steps to recapture and store the 400 billion tonnes of historical carbon emissions.
Reforestation and Afforestation
To recover the 129 million hectares of forest lost since 1990 in the same number of years would require planting a new forest the size of Switzerland every year for the next thirty. A forest that would, in 100 years, recapture most of the carbon emission from the original deforestation (200 billion tons).
Afforestation, the creation of new forests on agricultural and other land, would also capture some of the carbon emissions derived from the burning of fossil fuels during the same period. However, as afforestation is not as effective as reforestation, a plantation of 260 million hectares, the size of Kazakhstan, would be required to capture the same 200 billion tonnes of CO2.
Over a 100-year period these two forest projects, which collectively would cover an area the size of India and Pakistan, would recapture the 400 billion tonnes of historical CO2 emissions.
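The sequestration rates these figures imply can be checked with a little arithmetic: 200 billion tonnes over 129 million hectares works out at roughly 1,550 tonnes of CO2 per hectare over the century, against roughly 770 for the larger afforestation estate.

```python
# Back-of-envelope check of the per-hectare sequestration implied by the
# text's figures: 200 Gt CO2 captured over ~100 years on 129 Mha
# (reforestation) and 260 Mha (afforestation).

def t_per_ha(total_gt: float, area_mha: float) -> float:
    """Tonnes of CO2 captured per hectare over the whole period."""
    return total_gt * 1e9 / (area_mha * 1e6)

reforest = t_per_ha(200, 129)    # ~1550 t CO2/ha, ~15.5 t/ha/yr
afforest = t_per_ha(200, 260)    # ~769 t CO2/ha, ~7.7 t/ha/yr
print(f"reforestation ~{reforest:.0f} t/ha, afforestation ~{afforest:.0f} t/ha")
```

The factor-of-two difference between the two rates is exactly the "afforestation is not as effective as reforestation" assumption made above.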
However, even with these massive mitigation projects in place, a 75% reduction in fossil fuel emissions and a 50% reduction in methane emissions (an outcome that requires half of the World’s meat eaters to become vegetarian) by 2030, the World will still go on to produce 1.5 trillion tonnes of GHG emissions during the 21st century.
This is the reality of our 21st Century Global Carbon Footprint.
Climate Catastrophe Time Line
2030… The World has cut its fossil fuel emissions by 75% to 10 billion tonnes a year. Similarly, 50% of the World’s meat eaters have become vegetarian, we have stopped all deforestation and brought all commercial forests into zero carbon management: our emissions are down to 20 billion tonnes a year…
We will have reforested enough of the tropics to cover Portugal and the Spanish region of Galicia, and similarly planted new forests across the World’s other regions which would collectively cover Germany. However, none are established sufficiently to make any significant contribution to mitigation, so the global temperature is still rising and we are, despite these efforts, 50% worse off than when we started: we now have 600 billion tonnes of carbon in the atmosphere to deal with and are adding another 20 billion every year.
2040… The tropical forests have expanded and now cover the plains of Spain, whilst the new forests have grown to cover an area encompassing Germany and Poland. The good news is that the earlier forests are now capturing carbon: perhaps 10%, so 4 billion tonnes of the 40 billion they will eventually capture. The bad news is that cumulative emissions now top 800 billion tonnes.
2050… Despite having now reforested the tropics with sufficient forest to cover Portugal and Spain, we are but halfway there; there is still the equivalent of France to plant before we recover what was lost since 1990. Similarly, the new forests now cover an area that swallows the Czech Republic, Slovenia, Hungary and Austria. As with the tropical forests we are still not there; there is still Belarus, Ukraine and Romania to plant.
If our fossil fuel emissions are still at 10 billion tonnes a year then another 100 billion tonnes of CO2 will have been added to the 800 billion already in the atmosphere. If meat eaters are still insisting they would rather eat their bacon than save it, then they will have similarly added another 100 billion tonnes in the form of methane. On the plus side, the first of the forests will likely have captured 25% (10 billion tonnes) of their carbon potential and the second 10% (4 billion tonnes), so our efforts may have been sufficient to keep total global carbon emissions from fossil fuel burning and deforestation below the one trillion tonne mark, but by 2099 another 500 billion tonnes of GHGs will have been added to the atmosphere.
The End of the Fossil Fuel age
However, if on reaching 2030 the World has achieved a 75% reduction in fossil fuel emissions, adopts a zero emissions target for the next ten years and similarly a vegetarian diet, then by 2040 the fossil fuel age will come to an end, leaving a 700 billion tonne carbon footprint on the atmosphere. As long as the World continues with the reforestation and afforestation programs, then by the end of the 21st century this could be down to 400 billion tonnes… but then pigs might fly.
We tend to regard data as if it were a thing with dimensions and boundaries. A product of the information age we live in, it travels like the cargo of a ship on the virtual ocean that is the information highway; when in fact the cargo, the ship and the information highway are all data. There is only ocean.
This ocean of data drives society, determines national budgets, aids decisions in industry and pigeonholes us into social and economic groups. From the global to the personal level, data plays a significant role in all the decision processes of everyone’s life. Processes that, if based on poor, inaccurate, out-of-date or misleading data, risk producing decisions that are equally poor, misleading and out of date.
So, if we are to make good decisions, we need to know the outcomes, the benefits and consequences of our actions on ourselves, our neighbours and our environment. We need to understand the relationship between the macro and the micro, the local and the global and the only way to do that is through the data.
According to some reports we have generated more data in the last five years than in our entire history, and each year we generate more. With this explosion in data come opportunities for improving our decision processes and achieving global sustainability objectives. With those opportunities, however, come challenges in handling, differentiating and working out just what is and is not useful. For no data is better than the wrong data. The right data, despite what Mark Twain would aver, makes for good statistics, and good statistics support good decision processes. But what is the ‘right data’ in an information age awash with the stuff?
What Is Data
The internet is data: everything on it, and every piece of software on a computer, is made up of data. In the context herein, however, data has the narrower scientific definition of
“a set of values or measurements of qualitative or quantitative variables, records or information collected together for reference or analysis,” (Wikipedia)
it is the cargo on our ship…
The contents of a telephone book are an example of data collected for reference. Data that can be, and is, put into databases for analysis. Once entered it can be re-organized and sorted so as to reveal how the names are distributed, measure their frequency and estimate ethnic or socio-economic distributions. The analysis might reveal odd correlations, trends and anomalies, such as the frequency at which three sixes appear in the telephone numbers of people with double-barrelled names, that would otherwise be missed. Such anomalies can fuel conspiracies and are examples of statistics being used as a drunk uses a lamp post: more for support than illumination. In truth there is little one can get from a telephone book other than a telephone number and an address. That’s not to say that data isn’t useful.
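A toy version of that telephone-book analysis, with invented entries, might look like this:

```python
# Load telephone-book entries into a structure and measure name frequencies.
# All entries are invented for illustration.
from collections import Counter

entries = [
    ("Smith", "0151 496 0666"),
    ("Jones", "0151 496 0123"),
    ("Smith", "0151 496 0789"),
    ("Fortescue-Smythe", "0151 496 6660"),
]

surname_freq = Counter(name for name, _ in entries)
print(surname_freq.most_common(1))   # [('Smith', 2)]

# The 'odd correlation' from the text: double-barrelled names whose numbers
# contain at least three sixes -- a curiosity, not a meaningful signal.
odd = [n for n, num in entries if "-" in n and num.count("6") >= 3]
print(odd)   # ['Fortescue-Smythe']
```

The second query is precisely the lamp-post statistic: trivially easy to compute once the data is structured, and entirely without illumination.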
Types Of Data
Data categorisation is very much dependent on purpose; there is no single category structure applicable to all. With that in mind I propose four data ‘spheres’ to initially distinguish data types.
A telephone book is just one source of personal data, as is a mailing list, a club membership, a bank account or a tax office receipt. Individually these data sources provide limited information about an individual, but they contain fields (name, address, etc.) that make it easy to link the data so that collectively it documents extensive details about an individual’s personal and financial life. Scary stuff, and whilst it is the most precious kind of data, it makes up an insignificant fraction of the total data currently held or being generated on the internet.
The state of the nation, the productivity of industry and the movement of goods and services within and between trading entities rely on the supply of good data. The budget, government policy and changes to or the creation of new laws all rely on good, relevant data. Without it there would be no means to balance the books, to calculate a nation’s GDP or to value its currency. However, data collection currently lags behind the policy that relies on it. At best the figures are for the previous quarter, but more often than not they are estimates aggregated together from different sources.
Domestic government policy on health and education, as well as changes to and the creation of new laws, all rely on good data. At the regional level, data determines how policy will be implemented and budgets distributed between schools, policing, refuse collection, etc. National and local government therefore needs quantitative and qualitative data on the demographics, social trends, and political, cultural and ethnic identities of the people it serves.
Environmental data includes any lab, field or desktop data from any chemical, physical or biological discipline of the natural sciences. All data relating to Earth and biological disciplines, from theoretical particle physics to the applied science of agriculture, are forms of environmental data.
Non Exclusive Nature Of Data
Within these spheres data can be quantitative/qualitative, spatial/temporal, deterministic/stochastic or combinations thereof. The data may similarly be relevant to a few or to many, and have a lasting or fleeting influence; whilst most data conforms to the categories above, some straddles more than one, and all of it interacts with and influences the data in others. So whilst we can compartmentalize data, we can only understand it in the context of the whole.
What Is A Database
A database is an application (program) into which data can be input and organised, providing an indexing system and statistical information on the data. A simple data set could be the membership list of a golf club, each entry containing a member’s name, age, address, joining/subscription date and details of their achievements (e.g. handicap, or records held). The database would allow the club to sort the details by any field (name, age, address, joining date, subscription renewal, handicap, etc.), compile simple statistics (e.g. average age, length of membership) or see who hadn’t paid their subs. A database might store values, charts, tables and files, or just the location of the data, as with BitTorrent file-sharing sites or search engines (e.g. Google).
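The golf club example can be sketched with Python's built-in sqlite3 module; the field names and sample rows below are invented for illustration.

```python
# The golf-club membership list as a small relational database.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE members (
    name TEXT, age INTEGER, joined TEXT, subs_paid INTEGER, handicap REAL)""")
db.executemany("INSERT INTO members VALUES (?, ?, ?, ?, ?)", [
    ("Ada",   34, "2015-04-01", 1, 12.4),
    ("Brian", 61, "2009-06-15", 0, 8.1),
    ("Carol", 47, "2012-01-20", 1, 20.0),
])

# Sort by any field, compile simple statistics, or find unpaid subs:
avg_age = db.execute("SELECT AVG(age) FROM members").fetchone()[0]
unpaid = [row[0] for row in db.execute(
    "SELECT name FROM members WHERE subs_paid = 0 ORDER BY name")]
print(f"average age {avg_age:.1f}, unpaid: {unpaid}")
```

Each of the club's questions becomes a one-line query once the entries are in the database, which is the whole point of structuring the data.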
Types Of Database
All databases store information, ideally for easy retrieval. What differentiates one from another is the way the data is stored (within the database itself, or links to an external location), where the database is held (central or distributed), and how the data is subsequently accessed (public or private).
Whilst limited, and not generally regarded as a true database, a spreadsheet performs all the basic functions of one. MySQL, the database in the LAMP (Linux Apache MySQL PHP) stack that drives much of the internet, is an example of a more complex database. A MySQL database stores a web site’s content and links to its media. This content is accessed through PHP scripts (i.e. a Content Management System like WordPress) and then served to the internet by an Apache server running on Linux.
Distributed Hash Table (DHT)
A Distributed Hash Table (DHT) is a database that stores only the location(s) of a file along with a hash value (a unique reference derived from the contents of the file). The hash value stored in the database can then be compared with that of the external file in order to verify the integrity of the external file. A DHT may also hold data on when the file was added, the last time it was accessed and the total number of calls made to it. A DHT is the mechanism used for indexing and distributing files across a P2P network.
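The integrity check a DHT entry enables can be sketched as follows, with SHA-256 standing in for whichever hash function a particular DHT actually uses; the entry fields and file contents are invented.

```python
# Verify a retrieved file against the hash stored in a DHT-style entry.
import hashlib

def file_hash(data: bytes) -> str:
    """Unique reference derived from the contents of the file."""
    return hashlib.sha256(data).hexdigest()

# What the DHT stores for a file: its locations plus its content hash.
entry = {"locations": ["peer-a", "peer-b"],
         "hash": file_hash(b"crop manifest v1")}

# On retrieval, re-hash the bytes and compare with the stored value.
retrieved = b"crop manifest v1"
print(file_hash(retrieved) == entry["hash"])   # True: file is intact

tampered = b"crop manifest v2"
print(file_hash(tampered) == entry["hash"])    # False: contents changed
```

Any change to the external file, however small, produces a different hash, which is why the stored value qualifies the file's integrity without the DHT holding the file itself.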
The bitcoin blockchain solves trust issues for cryptocurrency, but burns a lot of fossil fuel in the process. Although the bitcoin blockchain is referred to as a distributed database, it is more a duplicated ledger, with every node maintaining an identical copy of the entire database. All nodes compete to balance the ledger by guessing a hash value; a value that cannot be calculated easily and can only be discovered by brute force. Guessed correctly, it balances the entire system and creates a block. That, in a nutshell, is the proof-of-work concept that makes the Bitcoin blockchain secure: a very energy-hungry solution to an integrity issue with Homo sapiens.
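A toy illustration of that brute-force guessing: keep trying nonces until the block's hash meets a target. The difficulty here is trivial and single SHA-256 is used, whereas Bitcoin uses double SHA-256 against a vastly harder target, which is where the energy goes.

```python
# Toy proof of work: guess nonces until the hash has a given number of
# leading zero hex digits. The hash cannot be calculated in advance; it can
# only be discovered by brute force.
import hashlib

def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block 1: Alice pays Bob 5")
print(f"nonce {nonce} -> {digest[:12]}…")
```

Raising the difficulty by one hex digit multiplies the expected number of guesses by sixteen, while verifying a claimed nonce takes a single hash: hard to find, trivial to check, which is exactly what secures the ledger.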
A Framework For Sustainability
In the previous post I summarised a recent technical report by the Open Data Institute (ODI) which raised the need for a “blockchain ecosystem to emerge that mirrored the common LAMP web stack” and was “compatible with the Web we have already.”
Reliable and secure as the software that underpins the LAMP stack is, it is now nearly 20 years old and has arguably reached its peak. It has similarly evolved to be better at generating data than dealing with it: good at serving files, not at dealing with the information in them. So whilst a data stack needs to evolve alongside the existing web structure, it will likely be an evolution independent of it. One ‘promising’ data stack identified by the ODI team which met this criterion was “Ethereum as an application layer, BigchainDB as a database layer and the Interplanetary File System (IPFS) as a storage layer”.
Application Database Storage (ADS) Network
Unlike the LAMP stack, the data ecosystem is more likely to evolve as a weave of intertwined data streams that converge on the nodes that use the data. Whereas in the LAMP stack exchanges between nodes occur at the server level, in an ADS network exchanges of data would occur in all three layers: Application, Database and Storage.
The Application Layer
What makes databases powerful are the scripts, applications, programs and content management systems that use them. Scripts are similarly responsible for entering data, and with the rapid growth in smart appliances and the IoT this data input is increasingly becoming automated. How useful all that data ultimately turns out to be will depend as much on the applications that can use it effectively as on the databases that store and organize it. Once data no longer has a processing value it would be archived, an action that would also be performed by an application.
The Database and Storage Layers
Data with different economic, social and environmental relevance, much of it originating from the application layer, is indexed and organized through the database layer before finding its way into the storage layer. There is a degree of blurring between these two layers: the database layer is dynamic, whilst the storage layer is more for large files, legacy databases, and redundant or archived data.
Blockchains As Metronomes In An ADS Network
The main function of a blockchain is to provide an immutable ledger that can be trusted. This is a property an ADS network can exploit in order to synchronize databases. In particular, supply chain auditing on a blockchain would provide a trusted data source for multiple users in a network, making blockchain the ideal tool with which to build an authentication and tracking system that shadows produce as it moves from farm to fork (strengthening the food chain with a blockchain).
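A minimal hash-chained ledger hints at how custody events on produce could be linked so the trail cannot be rewritten unnoticed. The field names below are invented, and a real system would add digital signatures and distributed consensus on top of the chaining shown here.

```python
# Each custody event embeds the hash of the previous record, so altering any
# past record breaks every hash that follows it.
import hashlib
import json

def add_event(chain: list, event: dict) -> None:
    prev = chain[-1]["hash"] if chain else "genesis"
    record = {"event": event, "prev": prev}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)

def verify(chain: list) -> bool:
    """Recompute every hash and check each link points at its predecessor."""
    prev = "genesis"
    for rec in chain:
        body = {"event": rec["event"], "prev": rec["prev"]}
        expect = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expect:
            return False
        prev = rec["hash"]
    return True

chain = []
add_event(chain, {"lot": "bananas-042", "holder": "farm", "action": "picked"})
add_event(chain, {"lot": "bananas-042", "holder": "exporter", "action": "shipped"})
add_event(chain, {"lot": "bananas-042", "holder": "shop", "action": "received"})
print(verify(chain))   # True
```

Any party holding a copy of the chain can run the same verification, which is what turns the custody trail into a trusted data source for every user in the network.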
A Manifest Of Global Agricultural Produce
With an authentication and tracking system on the blockchain, the origin of produce and the route it took to market could be qualified, providing invaluable data to producers, importers, retailers and consumers alike.
Once established, a consumer would have access to an audit trail through which they could authenticate the origin, production standards or carbon footprint of their food. Detailing the precise route that produce took from field to shelf would give importers and retailers insight into double handling, stalling and wastage en route, whilst national and supranational bodies would have precise data on the production, origin and consumption of agricultural produce. If data be the cargo in an ADS network, the supply chain authentication and tracking system is the ship that carries it.
Sowing The Seeds For Integrated Crop Production And Management Systems
With an authentication and tracking system in place a farmer would be able to track in real time how much produce left the farm and reached the intended market. He would be able to see this relative to his neighbour, relative to the acreage of a given crop in a region and relative to all the routes that crop took to market. Without having to communicate directly, all farmers in a publicly accessible authentication and tracking system would be exchanging data that would help them all plan and co-ordinate crop choices and market logistics.
It is a small step for that hub to widen, to encourage integrated crop production and management across a region and improved logistics to tackle over- and under-production and transport wastage. One more step and farmers could begin to operate in their own regional network, not only to produce and supply food but to create co-operatives that allocate resources more amicably or develop integrated fertility programs. My experiment with IRCC Cameroon was an attempt to remotely put such a structure in place.
Supporting The Development Of A Peer To Peer Economy
As well as farmers, retailers and consumers could build co-operatives around a supply chain. Orders could be automatically coordinated through logistics operators to find the optimum route, and then tracked to the delivery address. On arrival the order could trigger payment or payments. It is a future that relies on the establishment of an authentication and tracking system, as well as the marketplaces to promote and display the wares.
A good example of a blockchain authentication and tracking system is Deloitte’s ArtTracktive blockchain. Launched in May of this year to “prove the provenance and movements of artwork”, the same technology, despite the huge difference in the value of the goods, could be used to authenticate and track a hand of bananas from the Caribbean to the corner shop as easily as it can track a basket of fruit by Caravaggio to the Biblioteca Ambrosiana in Milan.
Widening the tracking remit are the London-based startups Blockverify and Provenance. A blockchain initiative on the Ethereum platform, Provenance currently provides authentication and traceability of bespoke goods, and is actively exploring retail supply chain tracking. Blockverify similarly claims to be able to provide blockchain authentication to the pharmaceutical, luxury goods, diamond and electronics industries.
OpenBazaar, a peer-to-peer marketplace now integrated with IPFS, is a decentralized Amazon/eBay that charges no fees and uses an escrow system with Bitcoin for payments. Although OpenBazaar discourages illicit trade, being a P2P network makes policing that policy difficult. Escrow brings in a new layer of authentication, a layer that would be enhanced and strengthened by an authentication and tracking system.
A decentralised marketplace using Bitcoin and supply chain tracking on a blockchain would represent the first completely decentralized marketplace created on the web. Whilst not completely ending the Dark Web, an authentication and tracking system would address many of the anonymity issues that P2P networks and cryptocurrency create by authenticating the sender, the delivery and the recipient. It is potentially a mechanism better suited to assisting the development of wholesale markets than a P2P reinvention of Yahoo Auctions.