
Computer and Other Models


Why Models?

A model can be a collection of data (a database), a set of calculations or algorithms, or a more complex, interactive programmatic construct with iteration, feedback loops, multiple inputs, and more. (For more on models, computer models, and simulations, see http://en.wikipedia.org/wiki/Model and http://en.wikipedia.org/wiki/Computer_model.)
Models can also be built upon a set (or multiple sets) of other models to achieve more comprehensive models; these compound, higher-level constructs could be called "metamodels" and, above those, "supermodels."
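
As a minimal sketch of this compounding idea (all names and numbers here are invented for illustration), a metamodel can simply be a model whose output is computed from the outputs of its sub-models:

```python
# A hypothetical sketch of models composing into higher-level "metamodels":
# each model maps inputs to a prediction, and a metamodel is itself a model
# built from sub-models.
from typing import Callable, Dict, List

Model = Callable[[Dict[str, float]], float]  # inputs -> predicted quantity

def make_metamodel(submodels: List[Model],
                   combine: Callable[[List[float]], float]) -> Model:
    """Compose sub-models into one higher-level model."""
    def metamodel(inputs: Dict[str, float]) -> float:
        return combine([m(inputs) for m in submodels])
    return metamodel

# Two toy sub-models of, say, regional energy demand.
heating = lambda x: 2.0 * max(0.0, 18.0 - x["temp_c"])
industry = lambda x: 0.5 * x["factory_hours"]

demand = make_metamodel([heating, industry], combine=sum)
print(demand({"temp_c": 5.0, "factory_hours": 16.0}))  # 26.0 + 8.0 = 34.0
```

Because the metamodel satisfies the same interface as its sub-models, metamodels can themselves be composed into still-higher supermodels.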

Why do we expect to get somewhere with models? Models are not political, not ideological, and not emotional; models may be as close to factual as humanity can get, given human limitations. If models are maintained in an "open source" framework, in the sense that their databases and mechanisms are available for all to work with, then they can be proofed against bias.

The WorldOpt Institute proposes that, by compiling more accurate models and synthesizing them into higher-level ones, we can create usefully predictive metamodels and supermodels to help optimize our world's health and sustainability (including both the biosphere and human society).

Some Useful Models

Probably the simplest models are databases: assemblies of like-kind data or statistics that, while often incomplete (they can rarely include every relevant data point), usually represent reality to a greater or lesser degree. Scrutiny must be applied to ascertain how complete and faithful such data are, and whether other models describing alternative or competing processes exist or should be considered.
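
As one illustration of that scrutiny, a rough, hypothetical coverage check might compare what a database actually holds against an independent estimate of how much exists to be recorded:

```python
# A hypothetical sketch of assessing a database model's completeness: given
# the records we could find and an (assumed) estimate of the true population
# size, report how much of reality the "model" actually captures.
def coverage(records: list, estimated_population: int) -> float:
    """Fraction of the real-world population the database captures."""
    return len(records) / estimated_population

tax_deals = ["deal_%d" % i for i in range(1500)]   # records we could find
print("coverage: %.0f%%" % (100 * coverage(tax_deals, 2400)))  # ~62%
```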

An example of a simple database model is a recent New York Times study of tax breaks given to a number of corporations by cities and states ("Billions in tax breaks benefit corporations, not cities, states"). The study admits that it cannot track all the data, but it is comprehensive enough to be a convincing representation, or model, of these costs.

Furthermore, the model is incomplete because some benefits do at least occasionally accrue to the cities and states from these tax breaks, an issue that also needs study before any conclusion can be drawn about the net benefit to corporations versus the cities and states. So from this partial study of just one subdivision of government economic behavior, we are left with a picture that is considerably hazy, yet indicative of considerable government incompetence stemming from improper modeling, or from no real modeling at all. Whatever diligence governments apply to such giveaways is unlikely to be tied to any comprehensive analysis of the greater societal good.
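
A hypothetical back-of-the-envelope calculation shows why the picture stays hazy: the cost side is roughly known, but the unstudied benefit side leaves the net figure spanning a wide range (all numbers below are invented):

```python
# Invented figures only: known costs minus an unstudied benefit range.
cost_to_governments = 80.0               # e.g., $80B/yr in tax breaks (assumed)
benefit_low, benefit_high = 10.0, 60.0   # plausible offsetting benefits (assumed)

net_low = cost_to_governments - benefit_high   # best case for governments
net_high = cost_to_governments - benefit_low   # worst case
print(f"net cost to governments: ${net_low:.0f}B to ${net_high:.0f}B/yr")
# Until the benefit side is modeled, the conclusion spans a wide range.
```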

If a wide sampling of such models in the economic/government-financing realm could be read into a high-capacity computer, we might be able to derive a synthesized metamodel of the efficacy of government spending allocations. (As if we need such a metamodel to intuit that government efficiency and transparency lack rigor! But a computed conclusion should not only be more convincing; it might also yield conclusions detailed enough to let us make better decisions, and that is really the goal.) A high-capacity computer such as IBM's "Watson" (the supercomputer that won the Jeopardy! contest in 2011), or possibly a SETI-like network of donated spare computer time, could be utilized.
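
A minimal sketch of the aggregation step (not of Watson or a volunteer-computing network themselves) might farm many toy sector models out to a pool of workers, standing in for donated compute, and synthesize their efficacy scores:

```python
# Hypothetical sketch: a worker pool evaluates many independent sector
# models in parallel, then their efficacy scores are synthesized into one
# metamodel estimate. The scores here are deterministic toy values.
from multiprocessing import Pool

def run_model(model_id: int) -> float:
    """Placeholder for one sector model's efficacy score in [0, 1]."""
    return (model_id * 37 % 100) / 100.0   # deterministic toy score

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        scores = pool.map(run_model, range(200))   # 200 toy sector models
    print(f"synthesized efficacy estimate: {sum(scores) / len(scores):.2f}")
```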

WorldOpt further proposes that similar metamodels of other, non-governmental realms of economic performance, from the "free" economy and from under-the-radar economies, could also be derived. Finally, a synthesis of those metamodels might create a supermodel of the world economy (including models of government-directed spending, of free investment, and of black markets, to name a few) and give us a basis on which to begin an optimization process to improve economic efficiency, transparency, and accountability. What is described here would be a rolling, iterative modeling process at every level, gradually improving the predictive metamodels and supermodels as more models, and more accurate models, are added to the mix.
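
A hypothetical sketch of that synthesis, with every figure invented, might weight each sector metamodel's efficacy estimate by the share of the economy the sector is assumed to represent:

```python
# Invented sector figures combined into one world-economy "supermodel"
# estimate, weighted by each sector's assumed share of the economy.
sector_efficacy = {"government": 0.48, "free_economy": 0.71, "black_market": 0.30}
sector_share = {"government": 0.35, "free_economy": 0.55, "black_market": 0.10}

world_efficacy = sum(sector_efficacy[s] * sector_share[s] for s in sector_efficacy)
print(f"world-economy efficacy estimate: {world_efficacy:.2f}")
# A rolling process would revise these inputs as better sector models arrive.
```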

Of course, models exist in most areas of physical science and biology as well.

A recent article in Earthwatch describes the accuracy of competing global-warming and weather models ("On Global Warming Models," 11/18/12) and speaks to our ability to incrementally upgrade both our choice of models and their accuracy.

The ability to iteratively improve such models implies that if we properly and flexibly construct metamodels and supermodels from a range of models, we will accumulate more and more information about optimizing our world. Ultimately, a super-compilation of the simpler existing models that describe human behavior and the physics and biology of the planet could lead to a worldwide optimization plan.
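
One standard way to iteratively upgrade our choice of models, sketched below with toy stand-ins rather than real climate data, is to weight competing models by their historical prediction error so that better performers dominate the combined forecast:

```python
# Hypothetical inverse-error weighting of competing models: the smaller a
# model's past error against observations, the more its forecast counts.
observations = [0.4, 0.5, 0.55, 0.6]            # past warming, toy units
model_hindcasts = {
    "model_A": [0.3, 0.45, 0.5, 0.62],
    "model_B": [0.6, 0.7, 0.8, 0.9],
}
model_forecasts = {"model_A": 0.68, "model_B": 1.0}

def mean_abs_error(pred, obs):
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(obs)

weights = {m: 1.0 / mean_abs_error(h, observations)
           for m, h in model_hindcasts.items()}
total = sum(weights.values())
combined = sum(weights[m] / total * model_forecasts[m] for m in weights)
print(f"combined forecast: {combined:.2f}")   # lands near the better model
```

As new observations arrive, the weights can be recomputed, which is exactly the kind of incremental upgrading the Earthwatch article points to.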

Further examples of models, with WorldOpt editorial comment, will be added to this page from time to time.

