Creationism/Evolution Debate

Discuss literature (e.g. books, newspapers), educational studies (getting help or opinions on homework or an essay), and philosophy.

Unread postby Seven at One Stroke » Tue Jun 03, 2003 8:49 pm

perze wrote:There are too many scientific facts that disprove the theory of evolution ... for one, the law of biogenesis ... simply said, like begets like ... second is the second law of thermodynamics ... or simply entropy ... any matter through time would tend to worsen its state rather than become better.

Well, perhaps there are scientific facts that disprove the theory of evolution (although I'm not familiar with any), but the two you've mentioned aren't among them.

Biogenesis is not a law; in fact, it was disproved in the 19th century. In 1828, the chemist Friedrich Wöhler discovered that heating ammonium cyanate produced urea, an organic compound. In recent years, simulations of the 'primordial soup' in a reducing atmosphere have also created organic molecules, and RNA has been shown to have enzymatic activity that may have accelerated the rate of formation of organic molecules. It is therefore definitely possible to obtain organic compounds from inorganic material.

The second law of thermodynamics is just that: a law of thermodynamics. It states that, in an isolated thermodynamic system, the molecules tend toward the most probable state. The principle cannot be proven a priori; it rests on statistical reasoning. Note that it says nothing about things becoming 'better' or 'worse'; it is simply a matter of probability. I've repeatedly pointed out that we should keep natural laws in their respective spheres and not mix them together.
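To make 'tend to the most probable state' concrete, here's a minimal sketch (Python, a toy illustration of my own; the numbers are arbitrary) of the classic Ehrenfest urn model: start every particle in one box, let randomly chosen particles hop between two boxes, and the split drifts toward 50/50, the most probable macrostate, with no notion of 'better' or 'worse' anywhere in sight.

[code]
# Toy Ehrenfest urn model: particles drift toward the most probable
# macrostate (an even split) purely by probability.
import random

N = 1000              # number of particles (arbitrary)
in_left = N           # start with every particle in the left box

for step in range(1, 10001):
    # a uniformly chosen particle is in the left box with
    # probability in_left / N
    if random.random() < in_left / N:
        in_left -= 1  # it hops to the right box
    else:
        in_left += 1  # it hops to the left box
    if step % 2000 == 0:
        print(f"step {step}: {in_left} left / {N - in_left} right")
[/code]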

In evolution, there is no such thing as a straightforward 'better', 'worse', or 'purpose'; the only deciding factor in whether a species survives is its 'relative fitness'. This term actually covers a huge variety of factors, but none of them amounts to a straightforward 'better'. Do humans run faster than cheetahs? Are we stronger than bears? Can we jump higher (relative to body size) than fleas? Physiologically, humans actually have a much lower relative fitness than many other beings. (I feel the urge to quote from King Lear, but I'll refrain.)
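Here's an equally minimal sketch (Python again, with invented fitness numbers) of why only relative fitness matters: multiply both fitness values by any constant and the trajectory is unchanged, which is exactly why there is no absolute 'better'.

[code]
# Toy haploid selection model: frequencies change according to
# fitness *relative* to the population mean, nothing more.
def next_freq(p, w_a, w_b):
    """Frequency of variant A after one generation of selection."""
    mean_w = p * w_a + (1 - p) * w_b
    return p * w_a / mean_w

p = 0.01                                 # variant A starts rare
for gen in range(100):
    p = next_freq(p, w_a=1.05, w_b=1.0)  # A is 5% fitter than B
print(f"frequency of A after 100 generations: {p:.3f}")
[/code]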

Taishi Ziyi wrote:Scientists have proved that man evolved from apes, but Christians have said that God made man in his own image, so does that mean God is a monkey?

I don't believe your statement is quite correct. Most taxonomists would say that humans and apes both evolved from a common ancestor. Since both humans and apes are extant, and both have undergone significant changes since speciation, it is quite incorrect to say 'humans evolved from apes'; it's rather like saying that 'John evolved from his sister Betty.'
Regarding your question: if you're an evolutionist, you're taking the Bible too literally; if you're a creationist, you're not taking it literally enough. If God did create man in his image, can't you speculate that God CAUSED humans to evolve into what we are today from ape-like ancestors? Remember that, as the Bible tells it, God also created all the other animals in the world, but it is never specifically said how. If God is truly omnipotent, he could shape the course of evolutionary history as well.
Moderation in pursuit of actual work is no vice.

Re: A.I.

Unread postby Rhiannon » Thu Jun 05, 2003 2:23 am

skyeye84 wrote:I would like to hear everyone's feelings about artificial intelligence.


A.I. can never equal or better natural intelligence. It can be 'smarter' in that it can better follow the logical flow of things and can store larger databases of memory. But the one thing that all AI programmers have detested about trying to program a realistic AI is that programming truly random processes is impossible. You cannot program pure randomness... everything becomes predictable. AI also tends to have a hard time linking its information to the proper context.
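A quick way to see that predictability (a minimal Python sketch; the seed value is arbitrary): a pseudo-random generator fed the same seed reproduces exactly the same 'random' sequence every time.

[code]
# Pseudo-random numbers are deterministic: same seed, same sequence.
import random

random.seed(42)
first = [random.randint(0, 9) for _ in range(5)]
random.seed(42)
second = [random.randint(0, 9) for _ in range(5)]
print(first, second, first == second)  # the lists are identical
[/code]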
"For us to have self-esteem is truly an act of revolution and our revolution is long overdue."
— Margaret Cho

Unread postby Russell » Thu Jun 05, 2003 2:38 am

If the biggest risk is not taking one, then that would make not taking a risk a risk, so wouldn't you not take the original risk of not taking a risk?

And I don't know if this is technically philosophy, but I always love oxymorons:

"I see," said the blind man to his deaf daughter.

'We're going to see some real-live ghosts.'
I suppose I should put something here, shouldn't I?

Re: A.I.

Unread postby Kymvir Raemiz » Fri Jun 06, 2003 3:22 pm

Wild-Eyes wrote:
skyeye84 wrote:I would like to hear everyone's feelings about artificial intelligence.


A.I. can never equal or better natural intelligence. It can be 'smarter' in that it can better follow the logical flow of things and can store larger databases of memory. But the one thing that all AI programmers have detested about trying to program a realistic AI is that programming truly random processes is impossible. You cannot program pure randomness... everything becomes predictable. AI also tends to have a hard time linking its information to the proper context.


I disagree. A.I. could be developed that betters natural intelligence; it just needs the right kind of technology. Natural intelligence consists of responses based on experiences stored in an associative context. Although they are nowhere near as complex as a human's associative capability, algorithms and databases that return information by associative methods have already been developed. There are currently two problems with building associative algorithms. One, just as in normal human development, the data must be built up over time while the system is fed stimuli. The other is creating an interface that directs the responses properly: an associative structure will frequently return more information than is needed, and because of the huge amount of information being drawn on, it needs a good filter.

For example, a human goes to the ballpark as a child and always buys a hot dog. Later in life, whenever he smells a hot dog, he thinks of the ballpark. The ballpark thought is incidental; the real point is that a hot dog is cooking nearby. An A.I. filter would have to be able to recognize that the ballpark is incidental.

Both of these problems are surmountable, and I expect they will be surmounted once a better reason to fund the development becomes available.
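As a crude illustration of that filtering problem (Python; the stimuli, associations, and weights are all invented for the example), an associative store can return everything it links to a stimulus, and even a simple threshold filter keeps only the strong associations while the incidental 'ballpark' memory is dropped:

[code]
# Toy associative lookup with a crude relevance filter.
associations = {
    "hot dog smell": [("hot dog nearby", 0.9), ("ballpark", 0.4),
                      ("childhood summers", 0.2)],
    "thunder":       [("storm nearby", 0.9), ("power outage", 0.3)],
}

def recall(stimulus, threshold=0.5):
    """Return only strongly weighted associations; weak, incidental
    ones (like 'ballpark') are filtered out."""
    return [item for item, weight in associations.get(stimulus, [])
            if weight >= threshold]

print(recall("hot dog smell"))  # ['hot dog nearby']
[/code]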

Unread postby Rhiannon » Fri Jun 06, 2003 5:48 pm

I suppose you're right. My point is that it's an "impossible" challenge to program a brain (an intelligence) similar to or better than the human one when we still understand so little of how our own brains work. You can't program an unknown function.

Also, why do we pursue artificial intelligence (particularly human-like AI) in the first place? There must be a deeper desire behind it than having a program that responds to our physical or mental needs (i.e., more than a smart encyclopedia or a robot that knows how to do your laundry).
"For us to have self-esteem is truly an act of revolution and our revolution is long overdue."
— Margaret Cho

Unread postby timon » Fri Jun 06, 2003 5:56 pm

Wild-Eyes wrote: why do we pursue artificial intelligence (particularly human-like AI) in the first place? There must be a deeper desire behind it than having a program that responds to our physical or mental needs (i.e., more than a smart encyclopedia or a robot that knows how to do your laundry).


my personal answer to that would be ... because i am lazy and i don't want to do anything except eat, sleep and play video games.

seriously, man pursues these things with one idea in mind .... convenience.
Give a man a fish and he will eat for a day. Teach a man to fish and he will sit in a boat drinking beer all day.

Unread postby Rhiannon » Fri Jun 06, 2003 7:01 pm

timon wrote:my personal answer to that would be ... because i am lazy and i don't want to do anything except eat, sleep and play video games.

seriously, man pursues these things with one idea in mind .... convenience.


But what would be the convenience or purpose of having a being/intelligence equal or superior to you in intelligence? You don't need them to be that intelligent for simple conveniences.
"For us to have self-esteem is truly an act of revolution and our revolution is long overdue."
— Margaret Cho

Unread postby Seven at One Stroke » Fri Jun 06, 2003 10:10 pm

Russell wrote:If the biggest risk is not taking one, then that would make not taking a risk a risk, so wouldn't you not take the original risk of not taking a risk?

What your statement says is basically this:
The biggest risk is not taking one [X] → Not taking a risk is a risk [Y] → Not taking a risk is (taking a risk) and (not taking a risk) [Z]

P | Q | P→Q | ¬P∨Q
T | T |  T  |  T
T | F |  F  |  F
F | T |  T  |  T
F | F |  T  |  T

Set the truth table aside for a moment and look at the first conditional.
Define the set A to contain all risky actions, and let the universe be all possible actions; Ã, the complement of A, contains all non-risky actions. The first statement then says that there exists an action that is simultaneously in A and not in A. This is the negation of a tautology (P∨¬P), hence a contradiction, so [X] is always false. From the truth table, since the antecedent is false, [X] → [Y] is automatically true. The whole statement is therefore true if and only if [Z] is true. But [Z] has the same form as [X]: it says there exists an element of à that is in A, which is again a contradiction, hence false. Therefore the entire proposition is false.

I know this seems ridiculous, but I hope it helps. The core of the argument is that all risky actions are risky and all non-risky actions are not risky. Since the two sets are complements of each other, they are always disjoint, which means an element of A can never be an element of Ã. Although I do sense what you're trying to say, I don't see how you can construct another category of risky actions that includes both non-risky and risky actions.
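For anyone who'd rather check the table mechanically, here's a small Python sketch (my own, just brute-forcing the four rows): P→Q agrees with ¬P∨Q on every row, and P∧¬P, an action both risky and not risky, is false on every row, which is the contradiction above.

[code]
# Brute-force the truth table: P->Q matches ~P v Q everywhere,
# and P & ~P is false on every row.
from itertools import product

def implies(p, q):
    return not (p and not q)  # false only when p is true and q is false

print(" P     Q     P->Q  ~PvQ  P&~P")
for P, Q in product([True, False], repeat=2):
    print(f"{str(P):5} {str(Q):5} {str(implies(P, Q)):5} "
          f"{str((not P) or Q):5} {str(P and not P):5}")
[/code]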
And I don't know if this is technically philosophy, but I always love oxymorons:

"I see," said the blind man to his deaf daughter.

'We're going to see some real-live ghosts.'

Well, the blind man (who is not mute) said this to his deaf daughter (and it's never stated that she hears him). Since the blind man is not Tiresias, his words need not be true, so there is no real contradiction in his saying 'I see.'
Moderation in pursuit of actual work is no vice.

Unread postby Seven at One Stroke » Fri Jun 06, 2003 10:20 pm

Wild-Eyes wrote:Also, why do we pursue artificial intelligence (particularly human-like AI) in the first place? There must be a deeper desire behind it than having a program that responds to our physical or mental needs (i.e., more than a smart encyclopedia or a robot that knows how to do your laundry).

I believe having a human-like AI system means humans no longer have to take part in 'dangerous' operations that require flexible judgment, such as running a nuclear plant or a space exploration program. The advantage of an AI system is that it is expendable, as long as backups exist. If an AI 'dies' on a mission, millions of copies can replace it, with little or no time or resources lost on training inexperienced rookies, compensating families, or getting other experts adjusted to a new environment. On the other hand, this raises the question of whether such AIs are intelligent beings and such operations suicide missions, and whether the practice of keeping robotic AI 'slaves' is moral.
Moderation in pursuit of actual work is no vice.

Unread postby Russell » Thu Jun 12, 2003 5:27 pm

Seven at One Stroke wrote:
Russell wrote:If the biggest risk is not taking one, then that would make not taking a risk a risk, so wouldn't you not take the original risk of not taking a risk?

What your statement says is basically this:
The biggest risk is not taking one [X] → Not taking a risk is a risk [Y] → Not taking a risk is (taking a risk) and (not taking a risk) [Z]

P | Q | P→Q | ¬P∨Q
T | T |  T  |  T
T | F |  F  |  F
F | T |  T  |  T
F | F |  T  |  T

Set the truth table aside for a moment and look at the first conditional.
Define the set A to contain all risky actions, and let the universe be all possible actions; Ã, the complement of A, contains all non-risky actions. The first statement then says that there exists an action that is simultaneously in A and not in A. This is the negation of a tautology (P∨¬P), hence a contradiction, so [X] is always false. From the truth table, since the antecedent is false, [X] → [Y] is automatically true. The whole statement is therefore true if and only if [Z] is true. But [Z] has the same form as [X]: it says there exists an element of à that is in A, which is again a contradiction, hence false. Therefore the entire proposition is false.

I know this seems ridiculous, but I hope it helps. The core of the argument is that all risky actions are risky and all non-risky actions are not risky. Since the two sets are complements of each other, they are always disjoint, which means an element of A can never be an element of Ã. Although I do sense what you're trying to say, I don't see how you can construct another category of risky actions that includes both non-risky and risky actions.


:shock: :shock: :shock:

:shock: I'm going to assume that you somehow disproved me, because that confused the crap out of me. :shock:
I suppose I should put something here, shouldn't I?
