These are some thoughts that I considered for book 3 of possibility thinking explorations in logic and thought. Many of them are probably flawed, so the burden of understanding lies entirely on the reader, and gossip is not allowed.

----------------------------------------

This is an unfinished writing and I disclaim all liability.

----------------------------------------

Copyright 2/1/2005 Justin Coslor

Hierarchical Number Theory: Graph Theory Conversions

Looking for patterns in this: Prime odd and even cardinality on the natural number system (*See diagram).

First I listed out the prime numbers all in a

row, separated by commas. Then above them I drew connecting arcs over

top of every other odd prime (of the ordering of primes). Over top of

those I drew an arc over every two of those arcs, sequentially. Then

over top of every sequential pair of those arcs I drew another arc, and

so on. Then I did the same thing below the listing of the numbers, but

this time starting with every other even prime.

Then I sequentially listed out a whole lot of natural numbers and did

the same thing to them down below them, except I put both every other

even and every other odd hierarchical ordering of arcs over top of one

another, down below the listing of the natural number system.

Then over top of that listing of the natural number system I

transposed the hierarchical arc structures from the prime number system;

putting both every other even prime and every other odd prime

hierarchically on top of each other, as I previously described. *Now I

must note that in all of these, in the center of every arc I drew a line

going straight up or down to the center number for that arc. (See

diagram.)
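A rough sketch of the arc construction described above, assuming level-1 arcs span every other element and each higher level spans every sequential pair of arcs on the level below (this is one reading of the description; the function names are illustrative):

```python
# Build the hierarchy of arcs described above. Level-1 arcs connect
# every other element of the listed sequence; each higher level draws
# one arc over every sequential pair of arcs on the level below. Arcs
# are recorded as (start_index, end_index) pairs, and the "center-pole"
# of an arc is the element midway between its endpoints.
def arc_hierarchy(seq):
    levels = []
    arcs = [(i, i + 2) for i in range(0, len(seq) - 2, 2)]
    while arcs:
        levels.append(arcs)
        # One arc over every sequential pair of arcs below.
        arcs = [(arcs[j][0], arcs[j + 1][1])
                for j in range(0, len(arcs) - 1, 2)]
    return levels

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23]
for depth, level in enumerate(arc_hierarchy(primes), start=1):
    for a, b in level:
        print("level", depth, "arc", primes[a], "-", primes[b],
              "center-pole:", primes[(a + b) // 2])
```

Running this on the first nine primes prints each arc with its center-pole number, level by level.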

In another example, I took the data, and spread out the numbers all

over the page in an optimal layout, where no hierarchical lines cross

each other, but the numbers act as nodal terminals from which the hierarchical arcs sprout. (See Diagram.) This made a very

beautiful picture which was very similar to a hypercube that has been

unfolded onto a 2D surface. Graph Theory might be able to be applied to

hierarchical representations that have been re-aligned in this manner,

and in that way axioms from Graph Theory might be able to be translated

into Hierarchical Number Theory.

The center-poles are very significant because when I transposed the

prime number structures onto the natural number system there is a

central non-prime even natural number in the very center directly

between the center-poles of the sequential arc structures of the every

other even prime and every other odd prime of the same hierarchical

level and group number. The incredibly amazing thing is that when

dealing with very large prime numbers, those prime numbers can be

further reduced by representing them as an offset equation of the

central number plus or minus an offset number. The beauty of it is that, since the central numbers aren't prime, they can be reduced in

parallel as the composite of some prime numbers, that when multiplied

together total that central number; and those prime composite numbers

can be further reduced in parallel by representing each one as their

central number (just like I previously described) plus or minus some

offset number, and so on and so on until you are dealing with very

manageably small numbers in a massively parallel computation. The offset

numbers can be similarly crunched down to practically nothing as well.
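A minimal sketch of the recursive center-plus-offset reduction. This is only an illustration: here the "center" is simply the nearest even number below the prime (the diagrams may define the center differently, as the midpoint between paired primes), and the factoring is plain trial division:

```python
# Recursively reduce a number to (center, offset) form: a prime p is
# written as center + offset, where the center is a nearby even
# composite; the composite center is then factored, and each prime
# factor is reduced the same way until all the pieces are small.
def factorize(n):
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def reduce_prime(p, limit=10):
    if p <= limit:
        return p                      # small enough: stop recursing
    center = p - 1 if p % 2 else p    # nearest even number at or below p
    offset = p - center               # 1 for every odd prime here
    # Factor the (composite) center and reduce each factor in turn;
    # the factor reductions are independent, so they could run in parallel.
    return ("+", [reduce_prime(f) for f in factorize(center)], offset)

print(reduce_prime(101))  # 101 = 2*2*5*5 + 1, each factor already small
```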

This very well may solve a large class of NP-complete problems!

Hurray! It could be extremely valuable in encryption, decryption,

heuristics, pattern recognition, random number testing, testing for

primality in the search for new primes, several branches of mathematics

and other hard sciences can benefit from it as well. I discovered it pretty

much independently, just playing around with numbers in a coffee shop

one day on 1/31/2005, and elaborated on 2/1/2005, and it was on 2/4/2005

when describing it to a friend who wishes to remain anonymous that I

realized this nifty prime-number crunching technique, a few days after

talking with the Carnegie Mellon University Logic and Computation Grad

Student Seth Casana; actually, it was then that I realized that prime

numbers could be represented as an offset equation, and then I figured

out how to reduce the offset equations to sets of smaller and smaller

offset equations. I was showing Seth the diagrams I had drawn and the

patterns in them. He commented that it looked like a Frege lattice or something. I think that after I pointed out the existence of central numbers

in the diagrams Seth told me that sometimes people represent prime

numbers as an offset, and that all he could think of was that they could

be some kind of offset or something. He's a total genius. He's

graduating this year with a thesis on complexity theory and the

philosophy of science. He made a bunch of Flash animations that teach

people epistemology.

Copyright 2/1/2005 Justin Coslor. Rough draft typed 3/19/2005.

This is an entirely new way to perceive number systems. It's a way to perceive them hierarchically. Many mathematical patterns may readily become apparent to number theorists as larger and

larger maps in this format are drawn and computed. Hopefully some will

be in the prime number system, as perceived through a variety of other

numbering systems and forms of cardinality. (See photos.)

Copyright 3/25/2004 Justin Coslor

Hierarchical Number Theory Applied to Graph Theory

When every-other-number numerical hierarchies are converted into

dependency charts and then those dependency charts are generalized and

pattern matched to graphs and partial graphs of problems, number theory

can apply to those problems because the hierarchies are based on the

number line of various cardinalities.

I had fun at Go Club yesterday, and while I was at the gym I thought

of another math invention. It was great. I figured out how to convert a

graph into a numerical hierarchy which is based on the number line, so

number theory can apply to the graph, and do so by pattern matching the

graph to the various graphs that are generated by converting numerical

hierarchical representations of the number line into dependency charts.

I don't know if that will make sense without seeing the diagrams, but

it's something like that. The exciting part is that almost any thing,

concept, game, or situation can be represented as a graph, and now, a

bunch of patterns can be translated into being able to apply to them.
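A sketch of the hierarchy-to-dependency-chart conversion described above: each arc becomes a node that depends on the two arcs (or, at the bottom level, the two numbers) it spans, giving an adjacency list that graph algorithms can run on. The pairing rule is assumed from the earlier arc description:

```python
# Convert the every-other-number arc hierarchy into a dependency chart:
# each arc becomes a node that depends on the two arcs (or, at the
# bottom level, the two numbers) that it spans. The result is an
# adjacency list, the form most graph algorithms expect.
def dependency_chart(count):
    graph = {}
    arcs = [(i, i + 2) for i in range(0, count - 2, 2)]
    for a, b in arcs:
        graph[(a, b)] = [a, b]          # bottom arcs depend on numbers
    while len(arcs) > 1:
        merged = []
        for j in range(0, len(arcs) - 1, 2):
            parent = (arcs[j][0], arcs[j + 1][1])
            graph[parent] = [arcs[j], arcs[j + 1]]
            merged.append(parent)
        arcs = merged
    return graph

chart = dependency_chart(9)
for node in sorted(chart):
    print(node, "<-", chart[node])
```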

Copyright 1/31/2005 Justin Coslor

Odd and Even Prime Cardinality

First twenty primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71.

-------------------

*See the photo of the diagram I drew on the original page.

What properties and relations are there between the odd primes? ("Odd" and "even" here refer to a prime's position in the ordering of primes.)

First ten odd primes: 2, 5, 11, 17, 23, 31, 41, 47, 59, 67.
First five odd odd primes: 2, 11, 23, 41, 59.
First five odd even primes: 5, 17, 31, 47, 67.
First ten even primes: 3, 7, 13, 19, 29, 37, 43, 53, 61, 71.
First five even even primes: 7, 19, 37, 53, 71.
First five even odd primes: 3, 13, 29, 43, 61.
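These groupings can be generated mechanically with a simple sieve; as above, "odd" and "even" refer to a prime's position in the ordering of primes, and each grouping can be applied again to the list it produces:

```python
# Group primes by their position in the prime sequence: "odd" primes
# sit at odd positions (1st, 3rd, 5th, ...), "even" primes at even
# positions, and the grouping can be re-applied to each resulting list.
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, ok in enumerate(sieve) if ok]

def odd_positions(seq):
    return seq[0::2]    # 1st, 3rd, 5th, ... elements

def even_positions(seq):
    return seq[1::2]    # 2nd, 4th, 6th, ... elements

ps = primes_up_to(73)
print(odd_positions(ps))                  # the "odd" primes
print(even_positions(ps))                 # the "even" primes
print(odd_positions(odd_positions(ps)))   # the "odd odd" primes
```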

--------------------------

prime^(odd^4) = prime^(odd)^(odd)^(odd)^(odd) = 2, 59, . . .
prime^(odd^3) = prime^(odd)^(odd)^(odd) = 2, 23, 59, . . .
prime^(odd^2) = prime^(odd)^(odd) = 2, 11, 23, 41, 59, . . .
prime^(odd) = 2, 5, 11, 17, 23, 31, 41, 47, 59, 67, . . .
prime^(odd)^(even) = 5, 17, 31, 47, 67, . . .
prime^(even)^(odd) = 3, 13, 29, 43, 61, . . .

----------------------------------

Copyright 6/10/2005 Justin Coslor

HOPS: Hierarchical Offset Prefixes

For counting hierarchically, prefix each set by the following

variables: parity, level, and group (group starting number). Then use

that group starting number as the starting position, and count up to the

number from zero placed at that starting position for representation of

a number prior to HOP computation. I need to develop a calculation

method for that representation.

Have a high-level index which lists all of the group starting

numbers for one of the highest rows, then the rest of the number's group

numbers can be derived for any given level above or below it. All

calculations should access this index.

If I was to look for the pattern "55" in a string of numbers, for

example, I might search linearly and copy all two-digit locations that

start with a "5" into a file, along with the memory address of each,

then throw out all instances that don't contain a "5" as the second

digit. That's one common way to search. But for addresses with a lot of

digits, such as extremely large numbers, this is impractical and it's

much easier to do hierarchical level math to check for matches. The

simplest way to do it is a hierarchical parity check + level check +

group # check before proceeding to check both parities of every subgroup

on level 1 of the offset number. The offset begins at zero at the

end of the prefix's group number, and a micro-hierarchy is built out of

that offset. For large numbers, this is much faster than using big

numbers for everything. Example: Imagine the number 123,456,789 on the

number line. We'll call it "N". N = 9 digits in decimal, and many more

digits in binary. In HOP notation, N = parity.level.group.offset. If I

had a comprehensive index of all the group numbers for a bunch of the

levels I could generate a prefix for this # N, and then I'd only have to

work with a tiny number that is the difference between the closest

highest group and the original number, because chances are the numbers I

apply it to are also offset by that prefix or a nearby prefix. The great

part about hierarchical offset prefixes is that it makes every number

very close to every other number because you just have to jump around

from level to level (vertically) and by group to group (horizontally).

I'll need to ask a programmer to make me a program that generates an

index of group numbers on each level, and the program should also be

able to do conversions between decimal numbers and hierarchical offset

prefixes (HOPs). That way there are only four simple equations necessary

to add, subtract, multiply, divide any two HOP numbers: just perform the

proper conversions between the HOPs' parity, levels, groups, and

offsets.
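One plausible formalization of a HOP, matching the later formula (baseN^level)*group + offset with base 2: a number decomposes as (2^level)*group + offset with the group number kept odd (a "multiple of 2, plus 1"). The parity field of the full parity.level.group.offset prefix is left out of this sketch, and the function names are illustrative:

```python
# One reading of a HOP: N = (2**level) * group + offset, with the group
# number kept odd so that only the small offset needs ordinary
# arithmetic. The parity field of the full parity.level.group.offset
# prefix is omitted here.
def to_hop(n, level):
    block = 2 ** level
    group = n // block
    if group % 2 == 0 and group > 0:   # force the group number odd
        group -= 1
    offset = n - block * group
    return (level, group, offset)

def from_hop(hop):
    level, group, offset = hop
    return (2 ** level) * group + offset

n = 123_456_789
hop = to_hop(n, 20)
print(hop, from_hop(hop) == n)  # (20, 117, 773397) True
```

An index of precomputed group numbers per level, as described above, would let the prefix be looked up instead of computed.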

Parity conversions are simple, level conversions are just dealing

with powers of 2, group conversions are just multiples of 2 + 1, and

offset conversions just deal with regular mathematics using small

numbers.

Copyright 7/7/2005 Justin Coslor

Prime Breakdown Lookup Tables

Make a lookup table of all of the prime numbers in level 1

level.group.offset notation, and calculate values for N levels up from

there for each prime in that same level.group.offset notation using the

level 1 database. 2^n = distance between prime 2^(n + m) and prime 2^(n

+ (m + 1)).

Center numbers are generated by picking another prime on that same

level somehow (I'm not positive how yet), and the number in-between them

is the center number. Center number factoring can be done repeatedly so

that, for example, if you wanted to multiply a million digit number by a

million digit number, you could spread that out into several thousand

small number calculations, and in that way primes can be factored using

center numbers + their offsets.
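A sketch of spreading one large multiplication into several smaller, independent multiplications via center + offset form. Here the centers are just round numbers rather than the prime-derived center numbers described above, but the splitting principle is the same:

```python
# With a = c1 + o1 and b = c2 + o2, the product is
#   a*b = c1*c2 + c1*o2 + o1*c2 + o1*o2,
# four multiplications of numbers that are individually easier to
# handle, and independent of each other, so they could be computed in
# parallel. Applied recursively, a huge multiplication spreads out into
# many small ones.
def split_multiply(a, b, block=1000):
    c1, o1 = (a // block) * block, a % block   # center + offset for a
    c2, o2 = (b // block) * block, b % block   # center + offset for b
    parts = [c1 * c2, c1 * o2, o1 * c2, o1 * o2]
    return sum(parts)

a, b = 1_000_003, 1_000_033
print(split_multiply(a, b) == a * b)  # True
```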

Also, prime number divisor checking can be done laterally in

parallel by representing each divisor in level.group.offset notation and

then converting the computation into a set of parallel processed center

number prime breakdown calculations, which would be significantly faster

than doing traditional divisor checking, especially for very large

divisors, assuming you have a parallel processor computer at your

disposal, or do distributed computing, and do multiprocessing/multi-

threading on each processor as well.

Copyright 10/7/2004 Justin Coslor

Prime divisor-checking in parallel processing pattern search. *I assume that people have always known this information.

Prime numbers are not:

1. Even --> Add all even numbers to the reject filter.
2. Divisible by other prime numbers --> Try dividing all numbers on the potentially prime list by all known primes.
3. Multiples of other prime numbers --> Parallel process: Map out in parallel multiples of known primes up to a certain range for the scope of the search field, and add those to the reject filter for that search scope.

When you try to divide numbers on

the potentially prime list, all of those divisions can be done in

parallel where each prime divisor is granted its own process, and

multiple numbers on the potentially prime list for that search scope

(actually all of the potentials) could be divisor-checked in parallel,

where every number on the potentially prime list is granted its own

complete set of parallel processes, where each set contains a separate

parallel process for every known prime. So fewer than half of the

numbers in the search scope will initially qualify to make it onto the

potentially prime list for divisor checking. And all of the potentially

prime numbers will need to have their divisor check processes augmented

as more primes are discovered in the search scope. The Sieve of

Eratosthenes says that the search scope is in the range of n^2, where n

is the largest known prime. Multiple search scopes can be running

concurrently as well, and smaller divisor checks will always finish much

sooner than the larger ones (sequentially) for all numbers not already

filtered out.

12/24/2004 Justin Coslor

Look for Ways to Merge Prime Number Perception Algorithms

I don't yet understand how the Riemann Zeta

Function works, but it might be compatible with some of the mathematics

I came up with for prime numbers (sequential prime number word list

heuristics, active filtering techniques, and every other number

groupings on the primes and on the natural number system). Maybe there

are lots of other prime number perception algorithms that can also be

used in conjunction with my algorithms. ??? -------------- Try applying

my algorithm for greatly simplifying the representation of large prime

numbers to the Riemann Zeta function. My algorithm reduces the

complexity of the patterns between sequential prime numbers to a fixed

five variable word for each pair of sequential primes, and there are

only 81 possible words in all. So as a result of fixing the pattern

representation language to only look for certain qualities that are in

every sequential prime relationship, rather than having infinite

possibilities and not knowing what to look for, patterns will emerge

after not too long into the computer runtime. These patterns can then be

used to predict the range of the scope of future undiscovered prime

numbers, which simplifies the search for the next prime dramatically,

but even more important than that is that my algorithm reduces the

cardinality complexity (the representation) of each prime number

significantly for all primes past a certain point, so in essence, this

language I've invented is a whole new number system, but I'm not sure

how to run computations on it. . .though it can be used with a search

engine as a cataloging method for dealing with extremely large numbers.

My algorithm is in this format: The Nth prime (in relation to the prime

that came before it) = the prime number nearest to [the midpoint of the

Nth prime, whether it be in the upper half or the lower half] : in

relation to the remainder of that "near-midpoint-prime" when subtracted

from the Nth prime. The biggest part always gets listed to the left of the smaller part (with a ratio sign separating them), and if the prime part got listed on one side for the N-1th prime and on the opposite side for the Nth prime, we take note of that. Next we find the

difference in the two parts and note if it is positive or negative, even

or odd, and lastly we compare it to the N-1th difference to see if it is

up, down, the same, or if N-1's difference is greater than 1 and N's

difference is 1 then we say it has been "reset". If the difference jumps

from 1 to a larger difference in N's difference we say it's "undo

reset". Also, the difference is the absolute value of the

"near-midpoint-prime" minus the remaining amount between it and the Nth

prime. Now each of these qualities can be represented by one letter and

placed in one of four sequential places (categories) to make a four

character word. Numbers could even be used instead of characters, but

that might confuse people (though not computers). *******************

"Prime Sequence Matcher" (to be made into software) *******************

This whole method is Copyright 10/25/2004 Justin Coslor, or even sooner

(most likely 10/17/2004, since that's when it occurred to me). I thought

of this idea to help the whole world and therefore must copyright it to

ensure that nobody hoards or misuses it. The algorithms behind this

method that I have invented are free for academic use by all United

Nations member nations, for fair good intent only towards everyone.

---------------------------------- Download a list of the first 10,000

prime numbers from the Internet, and consider formatting it in Emacs to look something like this:

1 2
2 3
3 5
4 7
5 11
6 13
. . .
10,000 ____

and

name that file primelist.txt.

-----------------------

Write a computer program in C or Java called "PrimeSequenceMatcher" that generates a file called "primerelations.txt" in the following format, based on calculations done on each line of the file "primelist.txt".

primelist.txt -> PrimeSequenceMatcher -> primerelations.txt

2 3 2:1 diff 1 left, pos, odd, same
3 5 3:2 diff 1 left, pos, even, up
4 7 5:2 diff 3 left, pos, even, same
5 11 7:4 diff 3 LR, neg, even, down (or reset)
6 13 6:5 diff 1 right, pos, even, up (or undo reset)
7 17 10:7 diff 3
. . .
N __ __:__ diff __

For the C program, see pg. 241 to 251 of Kernighan and Ritchie's book, "The C Programming

Language", for functions that might be useful in the program. See the

scans of my journal entries from 10/17/2004, 10/18/2004, and 10/24/2004

for details on the process (*Note, there may be a few errors, and the

paperwork is sort of sloppy for those dates...), and turn it into an

efficient explicit algorithm. **2/22/2005 Update: I wrote out the gist

of the algorithms for the software in my 10/26/2004 journal entry. The

point of generating the file primerelations.txt is to run the file

through pattern searching algorithms, and build a relational database,

because the language of the primes' representation in my method

is severely limited, patterns might emerge. Nobody knows whether or not

the patterns will be consistent in predicting the range that the next

primes will be in, but I hope that they will, and it's worth doing the

experiment since that would be a remarkable tool to have discovered. The

patterns may reveal in some cases which is larger: the

nearest-to-midpoint prime or its corresponding additive part, where the sum equals the prime. That would tell you a general range of where the

next prime isn't at. Also the patterns may in some cases have a

predictable "diff" value, which would be immensely valuable in knowing,

so that you can compare it to the values of the prime that came before

it, which would give a fairly close prediction of where the next prime

may lie. By looking at the pattern of the ordering of sentences, we can

possibly tell which side of the ratio sign the nearest-to-midpoint prime

of the next prime we are looking for lies on (and thus know whether it

is in the upper half or the lower half of the search scope). The search

scope for the next prime number is in the range of the largest known

prime squared. We might also be able to in some cases determine how far

from the absolute value of the difference between the nearest-to-

midpoint prime and the prime number we are looking for, that the prime

number that we are looking for is.

Copyright 10/26/2004 to 10/27/2004 Justin Coslor

I hereby release this idea under the GNU General Public License (GPL).

*************************
Prime Sequence Matcher Algorithm
*************************

(This algorithm is to be turned into

software. See previous journal entries that are related.)

Concept conceived of originally on 10/17/2004 by Justin Coslor.

Trends in these sequential prime relation sentences might emerge as lists of these sentences are formed and parsed for all, or a large chunk of, the known

primes.

-------------------------------

The following definitions are important to know in order to understand the algorithm:

nmp = the prime number nearest to the midpoint (the Nth prime we are representing, divided by 2)
aptnmp = adjacent part of the nmp = the prime number we are representing, minus nmp
prime/2 = (nmp + aptnmp)/2 = the midpoint of the prime
nmp = (2 * midpoint) - aptnmp
aptnmp = (2 * midpoint) - nmp
prime = 2 * midpoint
We take notice of whether nmp is greater than, equal to, or less than aptnmp.
diff = |nmp - aptnmp|
N prime = nmp:aptnmp or aptnmp:nmp, diff = |nmp - aptnmp|

___________________________________

| a | b | c | d |

| left | pos | even | up |

| right | neg | odd | down |

| LR | null | | same |

| RL | | | reset |

| | | | undoreset |

-----------------------------------

Each possible word can be abbreviated as a symbolic character or

symbolic digit, so the sentence is shortened to the size of a four

character word or four digit number.

*Note: "a" only = "same" when prime = 2 (.....that is, when N = 1).
**Note: If "c" ever = "same", then N is not prime, so halt.

"abcd" has less than or equal to 100 possible

sequential prime relation sentences (SPRS)'s, since the representation

is limited by the algorithms listed below. Generate a list of SPRS's for

all known primes and do pattern matching/search algorithms to look for

trends that will limit the search scope. The algorithms might even

include SPRS orderings recursively.

--------------------------------

Here are the rules that govern abcd:

If nmp > aptnmp, then a = left.
If nmp < aptnmp, then a = right.
If nmp = aptnmp, then a = same.
If N - 1's "a" = left, and N's "a" = right, then set N's "a" = LR.
If N - 1's "a" = right, and N's "a" = left, then set N's "a" = RL.
If N's nmp - (N - 1)'s nmp > 0, then b = pos.
If N's nmp - (N - 1)'s nmp < 0, then b = neg.
If c = same, then b = null. Meaning, if N's nmp - (N - 1)'s nmp = 0, then b = null.
If N's nmp - (N - 1)'s nmp is an even integer, then c = even.
If N's nmp - (N - 1)'s nmp is an odd integer, then c = odd.
If N's diff > (N - 1)'s diff, then d = up.
If N's diff < (N - 1)'s diff, then d = down.
If N's diff = (N - 1)'s diff, then d = same.
If (N - 1)'s diff > 1 and N's diff = 1, then d = reset.
If (N - 1)'s diff = 1 and N's diff > 1, then d = undoreset.
[......But maybe when (N - 1)'s diff and N's diff = either 1 or 3, then d would also = up, or d = down.]
If a = left or RL, then N prime = nmp:aptnmp, diff = |nmp - aptnmp|.
If a = right or LR, then N prime = aptnmp:nmp, diff = |nmp - aptnmp|.
If a = same, then N prime = nmp:nmp, diff = |nmp - aptnmp|, but only when N prime = N.
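A sketch of the core quantities: nmp, aptnmp, diff, and the "a" field. Note that the worked examples in this document sometimes pick the near-midpoint prime on the other side of the midpoint, so the selection here (nearest prime to the midpoint, with ties broken toward the larger prime) is only one possible reading of the diagrams:

```python
# For each prime p: nmp is the prime nearest p's midpoint, aptnmp is
# the adjacent part p - nmp, diff is |nmp - aptnmp|, and "a" records
# which side of the ratio the larger part sits on.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def nearest_midpoint_prime(p):
    mid = p / 2
    best = None
    for q in range(2, p):
        # Ties broken toward the larger prime (an assumption).
        if is_prime(q) and (best is None or abs(q - mid) <= abs(best - mid)):
            best = q
    return best

for p in [3, 5, 7, 11, 13, 17]:
    nmp = nearest_midpoint_prime(p)
    aptnmp = p - nmp
    diff = abs(nmp - aptnmp)
    a = "left" if nmp > aptnmp else ("right" if nmp < aptnmp else "same")
    print(p, f"{max(nmp, aptnmp)}:{min(nmp, aptnmp)}", "diff", diff, a)
```

The b, c, and d fields would be filled in by comparing each line against the previous one, following the rules above.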

----------------------------------- Copyright 10/24/2004 Justin Coslor

Prime number patterns based on a ratio balance of the largest

near-midpoint prime number and the non-prime combinations of factors in

the remainder: An overlay of symmetries describe prime number patterns

based on a ratio balance of the largest near midpoint prime number and

the non-prime combinations of factors in the remainder. This is to cut

down the search space for the next prime number, by guessing at what

range to search the prime in first, using this data.

For instance, we might describe the prime number 67 geometrically by

layering the prime number 31 under the remainder 36, which has the

modulo binary symmetry equivalency of the pattern 2*2*3*3. We always put

the largest number on top in our description, regardless of whether it

is prime or non-prime, because this ordering will be of importance in

our sentence description of that prime.

We describe the sentence in relation to how we described the prime

number that came before it. For instance, we described 61 as 61=31:2*3*5

ratio (the larger composite always goes on the left of the ratio symbol,

because it will be important to note which side the prime number ends up

on), difference of 1 (difference shows how far from the center the

near-mid prime lies. 31-30=1), right->left (this changing of sides is

important to note because it describes which side of the midpoint of the

prime that the nearest-to-midpoint prime lies on or has moved to, in

terms of the ratio symbol) odd same (this describes whether the

nearest-to-midpoint primes of two prime numbers have a difference that

is even, odd, or if they have the same nearest-to-midpoint primes.)

67=2*2*3*3:31 ratio, difference of 5, left->right same undo last reset.

By looking at the pattern in the sentence descriptions (180 possible

sentences), we can tell which side of the ratio sign that the next

prime's nearest-to-midpoint prime lies on, which tells you which half of

the search scope the next prime lies in, which might cut the

computational task of finding that next prime number in

half or more. A computer program to generate these sentences can be

written for doing the pattern matching. In the prime number 67 example,

the part that says "same" refers to whether the nearest-to-midpoint

primes of two prime numbers have a difference that is even, odd, or if

they have the same nearest-to-midpoint primes. I threw in the "reset to

1" thing just because it probably occurs a lot, then there's also the

infamous "undo-from-last-reset", which brings the difference from 1

back to where it was previously.

Copyright 10/5/2004 Justin Coslor

Prime Numbers in Geometry continued . . . Modulo Binary

I think that if

prime numbers can be expressed geometrically as ratios there might be a

geometric shortcut to determining if a number is prime or maybe

non-prime. Prime numbers can be represented symmetrically, but not with

colored partitions. (*See diagrams.) Here's a new kind of binary code

that I invented, based on the method of partitioning a circle and

alternately coloring and grouping the equiangled symmetrical partitions

of non-prime partition sections. (*Note, since prime numbers don't have

symmetrical equiangled partitions, use the center-number + offset

converted into modulo binary (see my 2/4/2005 idea and the 2/1/2005

diagram I drew for prime odd and even cardinality and data compression

on the prime numbers)).

Modulo binary: *Based on geometric symmetry ratios. **I may not have been very consistent with my numbering scheme here, but it should be consistent in the final draft version.

1=1
2=11
3=111
4=1010
5=11111
6=110110 or 101010
7=1111111
8=10101010 or 11101110
9=110110110
10=1010101010
11=11111111111
12=110110110110
13=1111111111111
14=10101010101010
15=10110,10110,10110
16=1010,1010,1010,1010

Find a

better way of doing this that might incorporate my prime center number +

offset representation of the primes and non-primes. This is an entirely

new way of counting, so try to make it scalable, and calculatable.

Secondary Levels of Modulo Binary: (*This is just experimental. . . I based these secondary levels on the first-level numbers that are multiples of these.)

0=00
1=1
2=10
3=110
4=2+2=1010
5=10110
6=3+3=110110 or 111000 or 101101
7=
8=4+4=10101010
9=3+3+3=110110110
10=1010101010
11=
12=3+3+3+3=110110110110
13=
14=10101010101010101010
15=5+5+5=101101011010110
16=4+4+4+4=1010101010101010

Draw a 49 section

and 56 section circle, and look for symmetries to figure out how best to

represent the number 7 in the secondary layer of modulo binary. There

needs to be a stop bit too. Maybe 00 or something, and always start

numbers with a 1. The numbers one through ten should be sufficient for

converting partially from base 10, where calculations would still be

done in base 10, but using modulo binary representations of each digit.

For encryption obfuscation and stuff. It seems that for even numbers, the half-circle symmetries rotate between 0,0 across the circle: numbers that are odd when divided by two have alternate-half 0,0 symmetry, but numbers that are prime when divided by two have middle-across 0,1 symmetry.

Copyright 9/30/2004 Justin Coslor

Prime Numbers in Geometry

*Turn this idea into a Design Science paper entitled "Patterns in prime composite partition coloring structures". In the paper, relate these discoveries to the periodic table. (All prime numbers can be represented as unique symmetries in Geometry.)

1/1 = 0 division lines
1/2 = 1 division line
1/3 = 3 division lines
1/4 = 2 division lines
1/5 = 5 division lines
1/6 = 5 division lines = one 1/2 division line and two 1/3 division lines on each half circle
1/7 = 7 division lines
1/8 = 4 division lines
1/9 = _____ division lines
. . .

Or maybe count by partition sections rather

than division lines. . . How do I write an algorithm or computer program

that counts how many division lines there are in a symmetrically

equiangled partitioning of a circle, where if two division lines that

meet in the middle (as all division lines do) form a straight line they

would only count as one line and not two? Generate a sequential list of

values to find their number of division lines, and see if there is any

pattern in the non-prime division line numbers (i.e. 1/4, 1/6, 1/8, 1/9,

1/10, 1/12, ...) that might be able to be related to the process of

determining or discovering which divisions are prime, or the sequence of

the prime numbers (1/2, 1/3, 1/5, 1/7, 1/11, 1/13, 1/17, ...).

10/5/2004 Justin Coslor

As it turns out, there is a pattern in the non-prime

division lines that partition a circle. The equiangled symmetry

partition patterns look like stacks of prime composites layered on top

of one another like the Tower of Hanoi computer game, where each layer's

non-prime symmetry pattern can be colored using its own colors in an

on-off configuration around the circle (See diagrams.). Prime layers

can't be colored in an on-off pattern symmetrically if the partitions

remain equiangled, because there would be two adjacent partitions

somewhere in the circle of the same color, and that's not symmetrical.
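The coloring argument in miniature: going around the ring of partitions, alternation forces colors 0,1,0,1,..., and the wrap-around from the last partition back to the first succeeds exactly when the partition count is even, which is why every prime count except 2 fails:

```python
# Try to color n equiangled partitions of a circle in an alternating
# on-off pattern. Alternation around the ring forces colors 0,1,0,1,...;
# the wrap-around works only if the first and last colors differ,
# i.e. exactly when n is even.
def alternately_colorable(n):
    colors = [i % 2 for i in range(n)]  # forced alternation around the ring
    return colors[0] != colors[-1]      # wrap-around check

for n in range(2, 12):
    print(n, "partitions:",
          "colorable" if alternately_colorable(n) else "not colorable")
```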

Copyright 7/25/2005 Justin Coslor

Geometry of the Numberline: Pictograms and Polygons. (See diagrams)

Obtain a list of sequential prime numbers. Then draw a pictogram

chart for each number on graph paper, with the base 10 digits 1 through

10 on the Y-axis, and on the X-axis of each pictogram the first column

is the 1's column, the second column is the 10's column, the third

column is the 100's column, etc. Then plot the points for each digit of

the prime number you're representing, and connect the lines

sequentially. That pictogram is then the exact unique base-10

geometrical representation of that particular prime number (and it can

be done for non-prime numbers too). Another way to make the pictogram

for a number is to plot the points as described, but then connect the

points to form a maximum surface area polygon, because when you do that,

that unique polygon exactly describes that particular number when it's listed in its original orientation inside the base-10 graph paper border that uses the minimum number of X-axis boxes necessary to convey the picture. Pictograms are always bordered on the canvas 10 boxes high in base 10. Other bases can be used too for different sets of

pictograms. What does the pictogram for a given number look like in

other bases? We can connect the dots to make a polygon too, that is

exactly the specific representation in its proper orientation of that

particular unique number represented in that base. I also wonder what

the pictograms and polygon pictograms look like when represented in

polar coordinates.
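The digit-plotting step above can be sketched in a few lines (a minimal Cartesian version; the function name is hypothetical, and digits are taken 0-9, least significant first, matching the 1's column in column 0):

```python
def pictogram_points(n, base=10):
    """Return the pictogram points (column, digit) for each digit of n,
    with the ones digit in column 0, as described above."""
    pts = []
    col = 0
    while True:
        pts.append((col, n % base))  # plot this place's digit
        n //= base
        col += 1
        if n == 0:
            break
    return pts

# 13063 in base 10: digits 3, 6, 0, 3, 1 from least to most significant
print(pictogram_points(13063))  # -> [(0, 3), (1, 6), (2, 0), (3, 3), (4, 1)]
```

Connecting the returned points in order gives the pictogram; connecting them as a maximum-area polygon gives the polygon variant. Passing a different `base` gives the other-base pictograms asked about above.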

These pictogram patterns might show up a lot in nature and artwork,

and it'd be interesting to do a mathematical study of photos and

artwork, where each polygon that matches gets bordered by the border of

its particular matching pictogram polygon in whatever base it happens

to be in, and pictures might be representable as layers of these

numerical pictograms, spread out all over the canvas overlapping and

all, and maybe partially hidden for some. You could in that way make a

coordinate system in which to calculate the positions and layerings of

the numerical pictograms that show up within the border of the photo or

frame of the artwork, and it could even be a form of steganography when

intentionally layered into photos and artwork, for cryptography and art.

Summing multiple vertexes of numerical polygon pictograms could also

be used as a technique that would be useful for surjectively distorting

sums of large numbers. That too might have applications in cryptography

and computer vector artwork.
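The vertex-summing idea is loose as written, but one possible reading, summing corresponding (column, digit) points of two pictograms componentwise, can be sketched like this (the function names and the zero-padding rule are my assumptions, not part of the original entry):

```python
def vertices(n, base=10):
    """Digit d in place i becomes the pictogram vertex (i, d)."""
    ds = []
    while n:
        ds.append(n % base)
        n //= base
    return [(i, d) for i, d in enumerate(ds or [0])]

def vertex_sum(a, b, base=10):
    """Sum two pictograms vertex by vertex, padding the shorter with zeros."""
    va, vb = vertices(a, base), vertices(b, base)
    m = max(len(va), len(vb))
    va += [(i, 0) for i in range(len(va), m)]
    vb += [(i, 0) for i in range(len(vb), m)]
    return [(i, da + db) for (i, da), (_, db) in zip(va, vb)]

print(vertex_sum(12, 34))  # -> [(0, 6), (1, 4)]
```

Because componentwise digit sums ignore carries, distinct pairs collapse onto the same result (12+34 and 14+32 both give [(0, 6), (1, 4)]), which seems to be the many-to-one "surjective distortion" the entry is after.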

See the diagram of the base 10 polar coordinate pictogram

representation of the number 13,063. With polar notation, as with

Cartesian Coordinate System notation of the pictograms, it's important

to note where the reference point is, and what base it's in, and whether

it's on a polar coordinate system or Cartesian Coordinate System. In

polar coordinates, you need to know where the center point is in

relation to the polygon. . .no, I'm wrong: it can be calculated as long as

no vertexes lie in a line. In all polygon representations, the edge

needs to touch all vertexes.

Copyright 7/27/2005 Justin Coslor Combining

level.group.offset hierarchical representation with base N pictogram

representation of numbers (See diagrams)

level.group.offset notation is (baseN^level)*group+offset. Pictogram

notation is as described previously.
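The level.group.offset formula above can be written out directly (a trivial sketch; the function name is hypothetical):

```python
def lgo_value(base, level, group, offset):
    """level.group.offset notation: value = (base**level) * group + offset."""
    return (base ** level) * group + offset

# e.g. base 10, level 3, group 7, offset 42: 7 * 1000 + 42
print(lgo_value(10, 3, 7, 42))  # -> 7042
```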

If you take the pictogram shape out of context and orient it

differently it could mean a lot of different things, but if you know the

orientation (you can calculate the spacing of the vertexes in different

orientations to find the correct orientation, but you must also

know what base the number is in to begin with) then you can decipher

what number the polygon represents. You must know what the base is

because it could be of an enormous base. . .you must also know an anchor

point for lining it up with the XY border of the number line context in

that base, because it could be a number shape floating inside an enormous

base for all anyone knows without that anchor point. Also, multiple

numbers on the same straight line can be confusing unless they are

clearly marked as vertexes. If multiple polygons are intersecting, then

they could represent a matrix equation of all of those numbers. Or if

there are three or four polygons connected to each other by a line or a

single vertex, then the three pictograms might represent the three or

four parts of a large or small level.group.offset number in a particular

base. Pictograms connected in level.group.offset notation would still

need to be independently rotated into their correct orientation, and

you'd need to know their anchor points and base, but you could very

simply represent an unfathomably enormous number that way in just a tiny

little drawing. Also, numbers might represent words in a dictionary or

letters of an alphabet. This is literally the most concise way to

represent unfathomably enormous numbers that possibly anyone has ever

imagined. Ever. You could write a computer program that would draw and

randomize these drawings as a translation from a dictionary/language set

and word processor document. They could be decoded by the reverse process

by people who know the anchor point keys and base keys for each polygon.

You can make the drawings as a subtle off-white color blended into the

white part of the background of a picture, and transmit enormous

documents as a single tiny little picture that just needs some

calculating and keys to decode. Different polygon pictograms, which each

could represent a string of numbers, which can be partitioned into

sections that each represents a word or character, could each be drawn

in a different color. So polygons in different colors and on different

layers of a haphazard stack could be organized: a shared color means the

polygons are part of the same document string, and the layering of the

polygons indicates the order in which the documents are to be read.

Copyright 7/28/2005 Justin Coslor Optimal

Data Compression: Geometric Numberline Pictograms

If each polygon is represented using a different color, you don't

even need to draw the lines that connect the vertexes, so that you can

cram as many polygons as possible onto the canvas. In each polygon, the

number of vertexes is the number of digits in whatever base it's being

represented in. Large bases will mean larger image dimensions, but will

allow for really small representations of large numbers. Ideally one

should use a particular color on only one polygon. For optimal

representation, one should represent each number in a base that is as

close to the number of digits in that base as possible. If you always do

that, then you won't have to know what base the polygon is represented

in to begin with (because it can be calculated). However, you will still

need to know the starting vertex or another anchor point to figure out

which orientation the polygon is to be perceived in. On polar

coordinate polygon pictograms, you will just need to know the center

point and a reference point such as where the zero mark is, as well as

what base the polygon is represented in (in most cases). Hierarchical

level.group.offset data compression techniques or other data compression

techniques can also be used.

Copyright 7/24/2005 Justin Coslor Prime

Inversion Charts (See diagram) Make a conversion list of the sequential

prime numbers, where each number (prime 1 through the N'th prime) is

inverted so that the least significant digit is now the most significant

digit, and the most significant digit is now the least significant digit

(ones column stays in the ones column, but the 10's column gets put in

the 10ths column on the other side of the decimal point, same with

hundreds, etc.). So you have a graph that goes from 0 through 10 on the

Y-axis, and 0 through N along the X axis, and you just plot the points

for prime 1 through the N'th prime and connect the dots sequentially.

Also, you can convert this into a binary string by making it so that if

any prime is higher up on the Y-axis than the prime before it, it

becomes a 1, and if it is less than the prime before it, it becomes a 0.
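The inversion and the up/down binary string just described can be sketched as follows (function names are hypothetical; ties, which the text leaves unspecified, are treated as 0 here, and exact fractions avoid floating-point comparison trouble):

```python
from fractions import Fraction

def invert(n, base=10):
    """Mirror n about the ones column: the ones digit stays put, the
    base's-place digit moves to the 1/base place, and so on."""
    digits = []
    while n:
        digits.append(n % base)
        n //= base
    # digits[0] is the ones digit; digits[k] moves to place base**-k
    return sum(Fraction(d, base ** k) for k, d in enumerate(digits))

def up_down_string(primes):
    """'1' where an inverted prime sits higher on the Y-axis than its
    predecessor, '0' otherwise."""
    inv = [invert(p) for p in primes]
    return ''.join('1' if b > a else '0' for a, b in zip(inv, inv[1:]))

print(invert(13))                               # prints 31/10, i.e. 3.1
print(up_down_string([2, 3, 5, 7, 11, 13, 17]))
```

For instance 13 inverts to 3.1 (ones digit 3 stays, tens digit 1 drops to the tenths column), and the up/down string over the first few primes is where the recurring binary patterns mentioned above would be searched for.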

Then you can look for patterns in that. I noticed many recurring binary

string patterns in that sequence, as well as many palindrome string

patterns in that representation (and I only looked at the first couple

of numbers, so there might be something to it).

10/8/2004 Justin Coslor

Classical Algebra (textbook notes) Pg. 157 of Classical Algebra fourth

edition says: The Prime Number Theorem: In the interval of 1 through X,

there are about X/ln(X) primes in this interval. P = X/ln(X), scope: (1, X),

or something. The book claims that they cannot factor 200-digit numbers

yet. In 1999 Nayan Hajratwala found a record new prime 2^6972593 - 1

with his PC. It's a Mersenne Prime over 2 million digits long. This book

deals a lot with encryption. I believe that nothing is 100% secure

except for the potential for a delay. On pg. 39 it says "There is no

known efficient procedure for finding prime numbers." On pg. 157 it

directly contradicts that statement by saying: "There are efficient

methods for finding very large prime numbers." The process I described

in my 10/7/2004 journal entry is like the Sieve of Eratosthenes, except

my method goes a step farther in making a continuously augmented filter

list of divisor multiples not to bother checking, while

simultaneously running the Sieve of Eratosthenes in a massive

synchronously parallel computational process. Prime numbers are useful

for use in pattern search algorithms that operate in abductive and

deductive reasoning engines (systems), which can be used to explore and

grow and help solve problems and provide new opportunities and to invent

things and do science simulations far beyond human capability. (Pg. 40)
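A plain Sieve of Eratosthenes, without the augmented filter list or the massive parallelism described above, is enough to check the pg. 157 estimate of about X/ln(X) primes below X:

```python
import math

def sieve(limit):
    """Classic Sieve of Eratosthenes: cross off multiples of each prime
    up to sqrt(limit), then collect what survives."""
    flags = [True] * (limit + 1)
    flags[0] = flags[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if flags[p]:
            for m in range(p * p, limit + 1, p):
                flags[m] = False
    return [i for i, is_prime in enumerate(flags) if is_prime]

# Prime Number Theorem check: actual count vs. the X/ln(X) estimate
x = 10000
primes = sieve(x)
print(len(primes), round(x / math.log(x)))
```

The estimate undershoots at small X (the actual count of primes below 10,000 is 1229) but the ratio tends to 1 as X grows, which is what the theorem asserts.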

Theorem: An integer x>1 is either prime or contains a prime factor

<=sqrt(x). Proof: if x is composite, then x=ab where a and b are integers

strictly between 1 and x. Since p is the smallest prime factor of x, a>=p

and b>=p, so x=ab>=p^2. Hence p<=sqrt(x). Example: If x=10, a=2, and b=5,

then p=2, p^2=4, so 10=2*5>=4. So the factors of x are within the scope

of (2, sqrt(x)), or else x is prime.
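The theorem can be checked by trial division up to sqrt(x), which is also the standard way to exploit it (function name hypothetical):

```python
def smallest_prime_factor(x):
    """Trial division up to sqrt(x). By the theorem above, any composite
    x has a prime factor no larger than sqrt(x)."""
    d = 2
    while d * d <= x:
        if x % d == 0:
            return d
        d += 1
    return x  # no divisor up to sqrt(x), so x itself is prime

# 10 = 2*5: smallest prime factor 2 <= sqrt(10); 97 has none, so it is prime
print(smallest_prime_factor(10), smallest_prime_factor(97))  # -> 2 97
```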

a^2>=b^2. x^2>=p^4. x^2/p^4=big. Try converting Fermat's Little Theorem

and other corollaries into geometry symmetries and modulo binary format.
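Before converting it to other formats, Fermat's Little Theorem can be stated and spot-checked directly: if p is prime and p does not divide a, then a^(p-1) ≡ 1 (mod p).

```python
def fermat_holds(a, p):
    """Fermat's Little Theorem check: a**(p-1) congruent to 1 mod p,
    computed with fast modular exponentiation."""
    return pow(a, p - 1, p) == 1

print(all(fermat_holds(a, 13) for a in range(1, 13)))  # -> True
print(fermat_holds(2, 15))                             # 15 is composite
```

The converse fails for some composites (Carmichael numbers), which is why this is a probabilistic primality filter rather than a proof.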

The propositions in Modern Algebra about modulo might only hold for two-

dimensional arithmetic, but if you add a 3rd dimension the rotations are

countable as periods on a spiral, which when viewed from a perpendicular

side-view looks like a 2-dimensional waveform. 9/26/2004 Justin Coslor

Privacy True privacy may not be possible, but the best that we can hope

for is a long enough delay in recognition of observations to have enough

time and patience to put things into the perspective of a more

understanding context.

Copyright 9/17/2004 Justin Coslor A Simple,

Concise Encryption Syntax. This can be one layer of an encryption that

can be the foundation of a concise syntax. *Important: The example does

not do this, but in practice, if you plan on using this kind of

encryption more than once, then be sure to generate a random unique

binary string for each letter of the alphabet, and make each X digits

long. Then generate a random binary string that is N times as long as

the length of your message to be sent, and append unique sequential

pieces (of equal length) of this random binary string to the right of

each character's binary representation. The remote parties should have

lots of securely acquired random unique alphabet/random binary string

pairs, such as on a DVD that was delivered by hand. In long messages,

never use the same alphabet's character(s) more than once but rotate to

the next binary character representation on the DVD sequentially. Here's

the example alphabet (note that you can of course choose your own

alphabetic representation as long as it is logically consistent):

a 010101  b 011001  c 011101  d 100001  e 100101  f 101001  g 110001  h 110101  i 111001

j 010110  k 011010  l 011110  m 100010  n 100110  o 101010  p 110010  q 110110  r 111010

s 010111  t 011011  u 011111  v 100011  w 100111  x 101011  y 110011  z 110111  space 111011

------------------------

EXAMPLE: "peace brother" can be encoded like this using that particular

alphabet:

0110111010010110001011010111001001100101

0101111010101010100001101110110101111001
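For comparison, here is just the substitution layer of the alphabet above, with no random-bit mixing; as the 2/18/2005 update below notes, the printed example strings apparently include the mixing step, which is why this plain encoding does not reproduce them:

```python
ALPHABET = {
    'a': '010101', 'b': '011001', 'c': '011101', 'd': '100001',
    'e': '100101', 'f': '101001', 'g': '110001', 'h': '110101',
    'i': '111001', 'j': '010110', 'k': '011010', 'l': '011110',
    'm': '100010', 'n': '100110', 'o': '101010', 'p': '110010',
    'q': '110110', 'r': '111010', 's': '010111', 't': '011011',
    'u': '011111', 'v': '100011', 'w': '100111', 'x': '101011',
    'y': '110011', 'z': '110111', ' ': '111011',
}

def encode(message):
    """Plain 6-bit substitution only -- no random-bit mixing layer."""
    return ''.join(ALPHABET[ch] for ch in message.lower())

bits = encode('peace brother')
print(len(bits))   # 13 characters * 6 bits = 78
print(bits[:6])    # 'p' -> 110010
```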

------------------------

2/18/2005 Update by Justin Coslor Well, I

forgot how to break my own code. Imagine that! I think it had something

to do with making up a random string that was of a length that is

divisible by the number of letters in the alphabet, yet is of equal

bit-length to the bit-translated message, so that you know how long the

message is, and you know how many bits it takes to represent each

character in the alphabet. Then systematically mix in the random bits

with the bits in the encoded message. In my alphabet I used 27

characters that were each six bits in length; and in my example, my

message was 13 characters long, 11 of which were unique. I seriously

have no idea what I was thinking when I wrote this example, but at least

my alphabet I do understand, and it's pretty concise, and sufficiently

obscured for some purposes.

Copyright 6/30/2005 Justin Coslor Automatic

Systems (See Diagram) There is 2D, and there are 3D snapshots

represented in 2D, and there is the model-theory approach of making

graphs and flowcharts, but why not add dimensional metrics to graph

diagrams to represent systems more accurately?

--------------------------------- Atomic Elements -> Mixing pot ->

Distillation/Recombination: A->B->C->D->E -> State Machine Output

Display (Active Graphing = real-time) -> Output Parsing and calculation

of refinements (Empirical) -> Set of contextually adaptive relations:

R1->A, R2->B, R3->C, R4->D, R5->E. -------------------------------------

Copyright 5/11/2005 Justin Coslor How to combine sequences: Draw a set

of Cartesian coordinate system axes, and on the x axis mark off the

points for one sequence, and on the y axis mark off the points for the

sequence you want to combine with it (and if you have three sequences

you want to combine, mark off the third sequence on the z-axis. ...for

more than 3 sequences, use linear algebra). Next draw a box between the

origin and the first point on each sequence; then calculate the length

of the diagonal. Then do the same for the next point in each sequence

and calculate the length of the diagonal. Eventually you will have a

unique sequence that is representative of all of the different sequences

that you combined into one in this manner. For instance, you could

generate a sequence that is the combination of the prime numbers and the

Fibonacci Sequence. In fact, the prime numbers might be a combination of

two or more other sequences in this manner, for all I know.

1/4/2005

Justin Coslor Notes from the book "Connections: The Geometric Bridge

Between Art and Science" + some ideas.

In a meeting with Nehru in India in 1958, Buckminster Fuller said "The problem of a

comprehensive design science is to isolate specific instances of the

pattern of a general, cosmic energy system and turn these to human use."

The topic of design science was started by architect, designer, and

inventor Buckminster Fuller. The chemical physicist Arthur Loeb

considers design science to be the grammar of space. Buy that book, as

well as the book "The Undecidable" by Martin Davis.

Chemist Istvan Hargittai edited two large books on symmetry. He also

edits the journals "symmetry" and "space structures" where I could

submit my paper on the geometry of prime numbers and patterns in

composite partition coloring structures. *Also, send it to Physical

Science Review to solicit scientific applications of my discovery. Send

it to some math journals too. Again, the paper I want to write is called

"Patterns in prime composite partition coloring structures", and it will

be based on that journal entry I had about symmetrically dividing up a

circle into partitions, then labeling the alternating patterns in the

symmetries using individual colors for each primary pattern in the

stack, similar to that game "The Tower of Hanoi". Study the writings of

Thales (Teacher of Pythagoras), who is known as the father of Greek

mathematics, astronomy, and philosophy, and who visited Egypt to learn

its secrets [Turnbull, 1961, "The Great Mathematicians"], [Gorman, 1979,

"Pythagoras - A Life"]

---------------------------- Connections page 11.

Figure 1.7 The Ptolemaic scale based on the primes 2, 3, and 5. C=1,

D=8/9, E=4/5, F=3/4, G=2/3, A=3/5, B=8/15, C=1/2.
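One hedged reading of Figure 1.7: the ratios run downward from C=1 to C'=1/2, so they behave like string-length fractions, and a pitch is a reference frequency divided by the ratio (shorter string, higher pitch). The 264 Hz reference for C is my assumption, not the book's.

```python
from fractions import Fraction

# Ptolemaic ratios from Figure 1.7, read as string-length fractions of C
PTOLEMAIC = {
    'C': Fraction(1), 'D': Fraction(8, 9), 'E': Fraction(4, 5),
    'F': Fraction(3, 4), 'G': Fraction(2, 3), 'A': Fraction(3, 5),
    'B': Fraction(8, 15), "C'": Fraction(1, 2),
}

def frequency(note, c_hz=264):
    """Shorter string -> higher pitch, so divide the reference by the ratio."""
    return c_hz / PTOLEMAIC[note]

for note in PTOLEMAIC:
    print(note, float(frequency(note)))
```

Under this reading G comes out at 396 Hz (a perfect fifth above C) and C' at 528 Hz (the octave), and every ratio factors entirely into the primes 2, 3, and 5, as the figure caption says.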

------------------------- Figure 1.6 The Pythagorean scale derived from

the primes 2 and 3: C=1, space=8/9, D=8/9, space=8/9, E=64/81,

space=243/256, F=3/4, space=8/9, G=2/3, space=8/9, A=16/27, space=8/9,

B=128/243, space=243/256, C'=1/2, space=8/9, D'=4/9, space=8/9,

E'=32/81, space=243/256, F'=3/8, space=8/9, G'=1/3, space=8/9, A'=8/27,

space=8/9, B'=64/243, space=243/256, C"=1/4. ----------------- *1/4/2005

Project:

Someday try writing an electronic music song that makes vivid use of

parallel mathematical algorithms based on the prime numbers. Actually,

come to think of it, this concept was presented in an episode of Star

Trek Voyager. ----------------------------

8/26/2004 Justin Coslor Notes

(pg. 1) These are my notes on three papers contributed to the MIT

Encyclopedias of Cognitive Science by Wilfried Sieg in July 1997: Report

CMU-PHIL-79, Philosophy, Methodology, Logic. Pittsburgh, Pennsylvania

15213-3890.

- Formal Systems
- Church-Turing Thesis
- Godel's Theorems

-------------------------------- Notes on Wilfried Sieg's "Properties of

Formal Systems" paper: Euclid's Elements -> axiomatic-deductive method.

Formal Systems = "Mechanical" regimentation of the inference steps along

with only syntactic statements described in a precise symbolic language

and a logical calculus, both of which must be recursive (by the

Church-Turing Thesis). Meaning Formal Systems use just the syntax of

symbolic word statements (not their meaning), recursive logical

calculus, and recursive symbolic definitions of each word.

Frege in 1879: "a symbolic language (with relations and

quantifiers)" + an adequate logical calculus -> the means for the

completely formal representation of mathematical proofs. Fregean frame

-> mathematical logic ->Whitehead & Russell's "Principia Mathematica" ->

metamathematical perspective <- Hilbert's "Grundlagen der Geometrie"

1899 *metamathematical perspective -> Hilbert & Bernays "Die Prinzipien

der Mathematik" lectures 1917-1918 -> first order logic = central

language + made a suitable logical calculus. Questions raised:

Completeness, consistency, decidability. Still active. Lots of progress

has been made in these areas since then. **Hilbert & Bernays "Die

Prinzipien der Mathematik" lectures 1917-1918 -> mathematical logic.

Kinds of completeness: Quasi-empirical completeness of Zermelo Fraenkel

set theory, syntactic completeness of formal theories, and semantic

completeness = all statements true in all models. - Sentential logic

proved complete by Hilbert and Bernays (1918) and Post (1921). - First

order logic proved complete by Godel (1930). "If every finite subset of

a system has a model, so does the system." But first order logic has

some non-standard models.

Hilbert's Entscheidungsproblem was proved undecidable by Church & Turing.

It was the decision problem for first order logic. So the "decision

problem" proved undecidable, but it led to recursion theoretic

complexity of sets, which led to classification of 1. arithmetical, 2.

hyper-arithmetical, and 3. analytical hierarchies. It later led to

computational complexity classes. So they couldn't prove what could be

decided in first order logic, but they could classify the complexity of

modes of computation using first order logic. ---In first order logic,

one can classify the empirical and computational complexity of syntactic

configurations whose formulas and proofs are effectively decidable by a

Turing Machine. I'm not positive about this next part. ...but, such

syntactic configurations (a.k.a. software that eventually halts) are

considered to be formal systems. In other words, one cannot classify

the empirical and computational complexity of software that never halts

(or hasn't halted), using first order logic. The Entscheidungsproblem

(the decision problem for first order logic) resulted in model theory, proof

theory, and computability theory. It required "effective methods" of

decision making to be precisely defined. Or rather, it required

effective methods of characterizing what could or couldn't be decided in

first-order logic.

The proof of the completeness theorem resulted in the relativity of

"being countable" which in turn resulted in the Skolem paradox. ***I

believe that paradoxes only occur when the context of a logic is

incomplete or when its foundational scope is not broad enough.

Semantic arguments in geometry yielded "Relative Consistency

Proofs". Hilbert used "finitist means" to establish the consistency of

formal systems. Ackermann, von Neumann, and Herbrand used a very

restricted induction principle to establish the consistency of number

theory. Modern proof theory used "constructivist" means to prove

significant parts of analysis. Insights have been gained into the

"normal form" of proofs in sequent and natural deduction calculi. So

they all wanted to map the spectrum of unbreakable reason. Godel firmly

believed that the terms 'formal system' or 'formalism' should never be

used for anything but software that halts.

-------------------------------------

9/1/2004 Justin Coslor Notes on

Wilfried Sieg's "Church-Turing Thesis" paper:

Church re-defined the term "effective calculable function" (of

positive integers) with the mathematically precise term "recursive

function". Kleene used the term "recursive" in "Introduction to

Metamathematics" in 1952. Turing independently suggested identifying

"effectively calculable functions" as functions whose values can be

computed (mechanically) using a Turing Machine. Turing & Church's theses

were, in effect, equivalent, and so jointly they are referred to as the

Church-Turing Thesis. Metamathematics takes formally presented theories

as objects of mathematical study (Hilbert 1904), and it's been pursued

since the 1920's, which led to precisely characterizing the class of

effective procedures, which led to the Entscheidungsproblem, which was

solved negatively relative to recursion (****but what about for

non-recursive systems?). Metamathematics also led to Godel's

Incompleteness Theorems (1931), which apply to all formal systems, like

type theory of Principia Mathematica or Zermelo-Fraenkel Set Theory,

etc. Effective Computability: So it seems like they all wanted

infallible systems (formal systems), and they were convinced that the

way to get there required a precise definition of effective

calculability. Church and Kleene thought it was equivalent to

lambda-definability, and later proved that lambda-definability is

equivalent to recursiveness (1935-1936).

Turing thought effective calculability could be defined as anything

that can be calculated on a Turing Machine (1936). Godel defined the

concept of a (general) recursive function using an equational calculus,

but was not convinced that all effectively calculable functions would

fall under it. Post (*my favorite definition...*) in 1936 made a model

that is strikingly similar to Turing's, but didn't provide any analysis

in support of the generality of his model. But Post did suggest

verifying formal theories by investigating ever wider formulations and

reducing them to his basic formulation. He considered this method of

identifying/defining effectively calculable functions as a working

hypothesis.

Post's method is strikingly similar to my friend Andrew J.

Dougherty's thesis of artificial intelligence, which is that at a

certain point, the compactness of a set of functions is maximized

through optimization and at that point, the complexity of their

informational content plateaus, unless you keep adding new functions. So

his solution to Artificial Intelligence is to assimilate all of the

known useful functions in the world, and optimize them to the plateau

point of complexity (put the information in lowest terms), and to then

use that condensed information set/tool in exploring for new functions

to add, so that the rich depth of the problem solving and information

seeking technology can continually improve past any plateau points.

(in 1939) Hilbert and Bernays showed that deductively formalized

functions require their proof predicates to be primitive recursive.

Such "reckonable" functions are recursive and can be evaluated in a very

restricted number-theoretic formalism. Godel emphasized that

provability and definability depend on the formalism considered. Godel

also emphasized that recursiveness or computability have an absoluteness

property not shared by provability or definability, and other

metamathematical notions.

My theory is a bottom-up approach for pattern discovery and adaptive

reconceptualization between the domains of different contexts, and can

provide the theoretical framework for abductive reasoning, necessary

for the application of my friend Andrew J. Dougherty's thesis. Perhaps

my theories could be abductively formalized? My theories do not require

empiricism (deduction) to produce new elements that are

primitive-recursive (circular-reasoning-based/symbolic/repetitive),

used in building and calculating statements and structures, that can add

new information. To me, "meaning" implies having an "appreciation" for

the information and functions and relations, at least in part; and that

this "appreciation" is obtained through recognition of the information

(and functions' and relations') utility or relative utility via use or

simulation experience within partially-defined contexts. I say

"partially-defined" contexts because by Godel's Incompleteness Theorems,

an all-encompassing ultimate context cannot be completely defined since

the definition itself (and its definer) would have to be part of that

context, which isn't possible because it would have to be infinitely

recursive and thus never fully representable.

Turing invented a mechanical method for operating symbolically. His

invention's concepts provided the mechanical means for running

simulations. Andrew J. Dougherty and I have created the concepts for

mechanically creating new simulations to run until all possible

simulations that can be created in good intention, that are helpful and

fair for all, exceed the number of such programs that can possibly be

used in all of existence, in all time frames forever, God willing.

Turing was a uniter, not a divider, and he demanded immediate

recognizability of symbolic configurations, so that basic computation

steps need not be further subdivided. *But there are limitations in

taking input at face value. Sieg in 1994, inspired by Turing's 1936

paper, formulated the following boundedness conditions and locality

limitations of computors: (B.1) there is a fixed bound for the number of

symbolic configurations a computor can immediately recognize; (B.2)

there is a fixed bound for the number of a computor's internal states

that need to be taken into account; -- therefore he can carry out only

finitely many different operations. These operations are restricted by

the following locality conditions: (L.1) only elements of observed

configurations can be changed. (L.2) the computor can shift his

attention from one symbolic configuration to another only if the second

is within a bounded distance from the first. *Humans are capable of more

than just mechanical processes. ---------------------------------- Notes

on Wilfried Sieg's "Godel's Theorems" paper: Kurt Godel established a

number of absolutely essential facts:

- completeness of first order logic
- relative consistency of the axiom of choice
- generalized continuum hypothesis
- (and, relevant to the foundations of mathematics:) *his two
Incompleteness Theorems (a.k.a. Godel's Theorems).

In the early 20th century dramatic development of logic in the

context of deep problems in the foundations in mathematics provided for

the first time the means to reflect mathematical practice in formal

theories. 1. - One question asked was: "Is there a formal theory such

that mathematical truth is co- extensive with provability in that

theory?" (Possibly... See Russell's type theory P of Principia

Mathematica and axiomatic set theory as formulated by Zermelo...) - From

Hilbert's research around 1920 another question emerged: 2. "Is the

consistency of mathematics in its formalized presentation provable by

restricted mathematical, so-called finitist means?" *To summarize

informally: 1. Is truth co-extensive with provability? 2. Is consistency

provable by finitist means? Godel proved the second question to be

negative for the case of formalizably finitist means. Godel's

Incompleteness theorems: - If P is consistent (thus recursive), then

there is a sentence sigma in the language of P, such that neither sigma

nor its negation not-sigma is provable in P. Sigma is thus independent

of P. (Is sigma the doughnut hole of reason that fits into the center of

the circular reasoning (into the center of, but independent from the

recursion)?) - If P is consistent, then cons, the statement in the

language of P that expresses the consistency of P, is not provable in P.

Actually Godel's second theorem claims the unprovability of that second

(meta) mathematical meaningful statement noted on pg. 7. Godel's first

incompleteness theorem's purpose is to actually demonstrate that some

syntactically true statements can be semantically false. He possibly did

this to show that formal theories are not adequate by themselves to

fully describe true knowledge, at least with knowledge that is

represented by numbers, that is. It illustrates how it is possible to

lie with numbers. In other words, syntax and semantics are mutually

exclusive, and Godel's second Incompleteness Theorem demonstrates that.

In other words the symbolically representative nature of language makes

it possible to lie and misinterpret.

Godel liked to explain how every consistently formal system that

contains a certain amount of number theory can be rigorously proven to

contain undecidably arithmetical propositions, including proving that

the consistency of systems within such a system is non-demonstratable;

and that this can all be proven using a Turing Machine.

Godel thought "the human mind (even within the realm of pure

mathematics) infinitely surpasses the power of any finite machine."

**But what about massively parallel Quantum supercomputers? Keep in mind

the boundary and limitation conditions that Sieg noted in his

Church-Turing Thesis paper of dimensional minds in relatable

timelines... (Computors).

8/26/2004 Justin Coslor Concepts that I'll need to study to better
understand logic and computation:

Readings: Euclid's Elements, Principia Mathematica
Completeness: quasi-empirical completeness, syntactic completeness,
semantic completeness
consistency
decidability
recursion theoretic complexity of sets
classification hierarchies
computational complexity classes
modes of computation
model theory
proof theory
computability theory
relative consistency proofs
consistency of formal systems
consistency of number theory
modern proof theory
constructivist proofs
semantic arguments in geometry
analysis
sequent and natural deduction calculi
recursive functions
Metamathematics
Type Theory
Zermelo-Fraenkel Set Theory
effective computability
Lambda-definability
investigating ever-wider formulations
primitive recursive proof predicates
provability and definability
meaning: [11/11/2004 Justin Coslor -- Meaning depends on goal-subjective
relative utility. In other words, experience leading up to perspective
filters and perspective relational association buffers.]
utility and relative utility
simulation
deductively formalized functions
boundedness conditions
locality limitations
formalizably finitist means
choice, continuum, foundations
syntax & semantics
incompleteness
undecidable arithmetical propositions
hierarchies: arithmetical, hyper-arithmetical (is hyper-arithmetical
where all of the nodes' relations are able to be retranslated to the
perspective of any particular node?), and analytical hierarchies
hierarchical complexity
computational complexity
Graph Theory
Knowledge Representation
Epistemology
Pattern Search, Recognition, Storage, and Retrieval
Appreciation

----------------------------------------

This is an unfinished writing and I disclaim all liability.

----------------------------------------
