The intent of our 2019-2020 Creation Science High School and Undergraduate Essay Contests is to encourage awareness of, and interest in, both the historical and observational evidences for creation by developing students’ understanding in an area of creation science research of their choosing. Papers will be grouped and judged according to the category of entry: High School Student or College (undergraduate) Student.
The writing prompts are intended to motivate thoughtful responses.
- Eight questions are drawn from the areas of science, math, language, and education.
- The three historical perspectives offer opportunities to focus on a historical event, person, or school of thought (and to explain its impact on creation science).
Please choose one of the eight selected questions or three historical perspectives for the topic of your paper.
– Selected Questions –
Question #1: Given the introductory recommended reading on particle physics, students choosing this question will write about PET scans (i.e., positron emission tomography).
Recommended reading: Before studying the writing prompt for this question, we invite you to first read a short online article dated October 31, 2018, by physicist Vernon R. Cupps, PhD, called “Baryon Conservation and the Antimatter Mystery.” This article appears in the November 2018 issue of the science and creation magazine Acts & Facts and, in our opinion, is an extremely helpful resource for students who pick positron emission tomography (PET scanning in medicine) as their essay topic. In it, Dr. Cupps discusses the science and the deeper theological issues surrounding the extraordinary imbalance between ordinary matter and anti-matter, and he does so in a way that translates exceptionally well to the issue at hand: explaining pair production and annihilation in PET scans in medicine.
In the October 19, 2017 issue of the journal Nature, scientists revealed their latest findings from an ongoing experiment that attempts to explain why there is more matter than anti-matter in the universe (an asymmetry that favors our existence). The scientists used the Anti-proton Decelerator, part of the physics program associated with the Large Hadron Collider facilities of CERN, in Geneva, Switzerland, to collect data on matter and anti-matter interactions via the Baryon Antibaryon Symmetry Experiment (otherwise known by the acronym BASE). The results showed parity (or symmetry) between matter and anti-matter, consistent with interactions observed previously and elsewhere.
The data were obtained by accelerating and firing protons (the matter) into a collection of anti-protons (the anti-matter) and then measuring the associated outcomes, in an attempt to detect any particles theoretically remaining from the resultant proton/anti-proton interactions. Notably, the interactions were aided by the positive electric charge carried by the proton and the negative electric charge carried by the anti-proton (i.e., opposite signs attract). Each collision produced an enormous amount of energy as the particles annihilated each other (i.e., simply disappeared). This phenomenon is referred to as an annihilation interaction.
TECHNICAL INSERT: Protons, together with neutrons, are subatomic particles found in the nuclei of atoms (and, collectively, they are referred to as nucleons). Each proton, as well as each neutron, falls into a general classification group called baryons. Under an even more generalized classification, protons and neutrons are part of the hadron family, and thus we start to develop meaning for CERN’s Large Hadron Collider, a type of synchrotron (a specifically designed particle accelerator) built to collide hadrons. Furthermore, protons and neutrons, together with a third notable particle, the electron (the subatomic particle that orbits the nucleus), are considered ordinary particles, and importantly, these three particles constitute the building blocks of ordinary matter (as well as the backbone of the Standard Model of particle physics). Thus, when contrasted with ordinary matter, we define anti-matter as material composed of anti-particles (equal in mass to their ordinary counterparts but opposite in electric charge). For example, just as there is one proton and one electron in a hydrogen atom, there is one anti-proton and one anti-electron (positron) in an anti-hydrogen atom.
The report in Nature was based on a very precise and accurate measuring technique, the most sophisticated applied to date. On this point, we emphasize that these findings agreed with the existing body of evidence for observational matter/anti-matter symmetry. Together, this evidence is at odds with the fundamental imbalance weighted toward matter that we readily experience around us, and it carries two important implications. First, the observed matter/anti-matter symmetry fails to answer why there is such an abundance of ordinary matter in the universe (and how and why we got here under the Big Bang theory of the cosmos). Second, the overall findings, grounded in empirical evidence, uphold the Standard Model of particle physics.
Although we risk getting lost in the details of BASE, and quickly rendering it as a mere “thought experiment” bordering on the abstract, a more pragmatic understanding of matter and anti-matter (as well as of annihilation interactions) is attainable. For this we look at medical physics, and specifically, medical imaging via positron emission tomography (PET).
Positron Emission Tomography (or PET Scans in Medicine)
The technique of PET imaging
PET scan imaging is a beneficial medical technique routinely, albeit selectively, used to diagnose, stage, or rule out physiological abnormalities at the molecular level. In fact, the technique is only made possible by exploiting the physics of the positron, the anti-matter counterpart of the electron. Thus, we are of the opinion that a proper framework for understanding particle physics can be constructed through a well-developed appreciation of the technique employed in PET scanning, chiefly pair production and annihilation interactions.
Schematic of pair production and annihilation interaction
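To make the annihilation interaction concrete, the energy of the photons detected in a PET scanner follows directly from E = mc². The short sketch below, using standard CODATA constant values, computes the rest-mass energy of one electron (or positron); it is an illustrative calculation only, not part of any contest requirement or vendor protocol.

```python
# CODATA values for the constants involved.
ELECTRON_MASS_KG = 9.1093837015e-31   # rest mass of electron (= positron)
SPEED_OF_LIGHT = 2.99792458e8         # m/s
EV_PER_JOULE = 1 / 1.602176634e-19

def annihilation_photon_energy_kev():
    """Each electron-positron annihilation produces two photons, emitted
    back to back; each photon carries one electron rest-mass energy
    (E = m * c^2)."""
    joules = ELECTRON_MASS_KG * SPEED_OF_LIGHT ** 2
    return joules * EV_PER_JOULE / 1000  # convert eV to keV

print(round(annihilation_photon_energy_kev()))  # 511 keV per photon
```

These back-to-back 511 keV photons are exactly what the ring of detectors in a PET scanner is tuned to register in coincidence.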
For the contest, two considerations are raised concerning anti-electrons (or positrons) and PET scan imaging:
Option 1: Please demonstrate a basic understanding of the Standard Model of particle physics by describing the process of PET scanning. (Note: pay close attention to the differences between annihilation interactions and pair production.)
Remark: A concise review of the PET technique is available in a June 2003 editorial by Abi Berger called “How Does It Work? Positron Emission Tomography,” which appears in the medical journal BMJ.
Option 2: Discuss the Standard Model of particle physics and the process of PET scanning (including annihilation interactions and pair production), but please center your essay on the ramifications of the prevalence of ordinary matter in the universe (the so-called universal asymmetry that favors ordinary matter) as understood through an account of biblical creation.
Question #2: The background for this question starts with a physical description of the atom and then elaborates on several foundational concepts in the field of chemistry. The question itself is concerned with periodic trends, writing and naming chemical compounds, and biological interactions.
The discovery of the electron by J.J. Thomson in 1897 marks the first foray into the realm of subatomic particles. He arrived at this discovery via his cathode ray tube experiment. Today, we understand that the electron, while an elementary particle, is anything but simple. In fact, the effort that goes into learning about electron configuration (i.e., the placement of electrons around an atom’s nucleus) is a great way to introduce the concept of the quantum number. The art and science surrounding electron quantum numbers provide a systematic approach to probing the innate characteristics of any fundamental particle. What is more, because the word characteristic depicts the associated meaning equally well (and sometimes better), we use the terms quantum number and quantum characteristic interchangeably.
Electron Configuration of the Atom
There are four quantum characteristics associated with the electron, as follows:
Shell—the term shell refers to the placement of an electron in orbit (or in an electron orbital) around the nucleus. Either we may number the shells n = 1, 2, 3, …, or we may letter them, such that 1 = K, 2 = L, 3 = M, and so on. Convention dictates use of the letter n to denote the shell number (n for number). As n increases, not only does the distance from the nucleus increase, but so too does the amount of energy needed by electrons to fill the higher shells (i.e., those shells of greater distance from the nucleus). We say that each shell has its own inherent energy level, and that each successive shell’s energy level is higher than those below it.
Subshell—the term subshell refers to the orbital shapes found within each principal shell’s energy level. By convention, we use the letter ℓ to denote the subshell. For a given shell n, the allowed subshell values run from ℓ = 0 up to ℓ = n – 1.
So, with one shell (such that n = 1), we get ℓ = 0, which marks the s-orbital subshell, which consists of a spherical electron cloud.
Figure: The s-orbital subshell, ℓ = 0 and mℓ = 0.
For two shells (such that n = 2), we get ℓ = 0 or 1, with ℓ = 1 marking a p-orbital subshell. The p-orbital subshell is composed of three paired oblong lobes aligned side by side, up and down, or front and back. The electron cloud probability pattern for this subshell is dumbbell shaped.
Figure: The p-orbital subshells, ℓ = 1 and mℓ = –1, 0, 1.
With three shells (such that n = 3), we get ℓ = 0, 1, or 2, with ℓ = 2 marking a d-orbital subshell. The electron cloud probability pattern for d-orbitals takes the shape of a clover leaf.
Figure: The d-orbital subshells, ℓ = 2 and mℓ = –2, –1, 0, 1, 2.
At the next highest energy level, we have the fourth shell (such that n = 4), thus we get ℓ = 0, 1, 2, or 3, with ℓ = 3 marking the f-orbital subshell.
Figure: The f-orbitals, ℓ = 3 and mℓ = –3, –2, –1, 0, 1, 2, and 3.
When designating each subshell’s electron cloud orientation per energy level, we use the magnetic quantum number, commonly abbreviated mℓ. The values associated with this characteristic identify specific orbitals within a particular subshell, and they run in integer steps from –ℓ to +ℓ. So, when ℓ = 0, mℓ = 0, and we end up with just the sphere-shaped s-orbital (see above). With ℓ = 1, we get mℓ = –1, 0, 1, each value marking one of the three possible orientations of the p-orbitals (i.e., top/bottom, front/back, and right/left) (as depicted above). When ℓ = 2, we get mℓ = –2, –1, 0, 1, 2, the five possible orientations of the d-orbitals (see above). At the next highest energy level, even more placement options are available: when ℓ = 3, we have seven choices, mℓ = –3, –2, –1, 0, 1, 2, and 3, for the f-orbitals (as depicted above).
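The relationships among n, ℓ, and mℓ described above can be enumerated in a few lines. This sketch simply lists, for a chosen shell number, each allowed subshell and its magnetic quantum numbers:

```python
SUBSHELL_LETTERS = "spdf"  # conventional letters for l = 0, 1, 2, 3

def allowed_quantum_numbers(n):
    """For principal quantum number n, return each allowed subshell
    (l runs from 0 to n - 1) mapped to its magnetic quantum numbers
    (m_l runs in integer steps from -l to +l)."""
    return {SUBSHELL_LETTERS[l]: list(range(-l, l + 1)) for l in range(n)}

print(allowed_quantum_numbers(3))
# {'s': [0], 'p': [-1, 0, 1], 'd': [-2, -1, 0, 1, 2]}
```

Note that the counts agree with the text: one s-orbital, three p-orbitals, five d-orbitals, and (for n = 4) seven f-orbitals.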
The spin quantum number for the electron is either –1/2 or 1/2, and is commonly abbreviated ms. We say that –1/2 is spin down and 1/2 is spin up. We need to resist the urge to associate the idea of spin with physical spin about an axis (like that of a planet), although at times such an interpretation is convenient. Notably, spin must be considered an intrinsic, relativistic property of the electron. What is more, the Pauli exclusion principle tells us that any electron in orbit around an atomic nucleus can be identified uniquely by these four quantum numbers; that is, no two electrons in the same atom share the same set of four quantum numbers. Practically speaking, this exclusion principle means that no more than two electrons can share any one orbital at the same time, and when two electrons do occupy an orbital, one is spin up and the other is spin down. Once again, it is worth emphasizing that no two electrons have the same four quantum numbers: n, ℓ, mℓ, and ms.
Electron Configuration of the Atom (continued)
Four guiding principles for filling electron clouds
Four guiding principles provide strategies for how electrons fill the electron clouds of orbitals around a nucleus. The first of these is the Pauli exclusion principle: as emphasized above, it tells us that no two electrons in an atom have the same four quantum numbers. Next, we consider the influence of orbital energy levels. Simply stated, orbitals of lowest energy are filled first; this strategy is called the Aufbau principle. However, when multiple orbitals exist at the same energy level (and keeping in mind that only two electrons can occupy any one orbital at a time), the third strategy, known as Hund’s rule, dictates that one electron is placed into each orbital before any doubling up. For example, for the three dumbbell-shaped p-orbitals, each must be filled with one electron prior to placing a second electron in any of the three (please see the schematic of the p-orbitals below).
Figure: Each of the p-orbitals (as depicted here: left image, center image, and right image) must be filled with one electron prior to placement of a second electron. Hund’s rule dictates that placement would first fill the orbitals with, for example, all spin-up electrons prior to adding a spin-down electron.
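Hund’s rule, as described above, can be sketched as a simple round-robin placement of electrons into the orbitals of a subshell; one electron goes into each orbital before any orbital receives a second:

```python
def fill_subshell(num_orbitals, electrons):
    """Distribute electrons into the orbitals of one subshell per
    Hund's rule: one electron in each orbital before pairing begins.
    Returns the occupancy of each orbital (0, 1, or 2 electrons)."""
    occupancy = [0] * num_orbitals
    for i in range(electrons):
        occupancy[i % num_orbitals] += 1
    return occupancy

print(fill_subshell(3, 3))  # nitrogen's 2p3 -> [1, 1, 1], all unpaired
print(fill_subshell(3, 4))  # oxygen's 2p4  -> [2, 1, 1], one pair forms
```

Nitrogen’s three unpaired 2p electrons versus oxygen’s single forced pair is the textbook illustration of the rule.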
Finally, the Madelung rule provides a concrete way of describing how the electron filling sequence is ultimately carried out: subshells are filled in order of increasing n + ℓ, and when two subshells share the same n + ℓ value, the one with the lower n fills first (which is why, for example, 4s fills before 3d).
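The Madelung ordering is easy to generate directly from the (n + ℓ, then lower n) sorting key, as this short sketch shows:

```python
def madelung_order(max_n=5):
    """Return subshell labels sorted by the Madelung rule: lowest
    n + l first; ties broken by the lower n."""
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return ["%d%s" % (n, "spdfg"[l]) for n, l in subshells]

print(madelung_order())
# ['1s', '2s', '2p', '3s', '3p', '4s', '3d', '4p', '5s', ...]
```

Notice that 4s appears before 3d, matching the shape of the block diagram discussed next.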
Electron Configuration of the Atom (continued)
Bringing everything together about electron configuration in a single picture
Perhaps the best way to look at the s-, p-, d-, and f-subshells is to think about how they appear as blocks on the periodic table of elements. The following depiction does just that.
Figure: Block-wise schematic for electron configuration. (Please note that the noble gas helium was moved to its proper location within the second group of the table.)
Several examples to help drive home the point
What follows are several examples to help make electron configuration tangible.
H: 1s¹ Na: 1s²2s²2p⁶3s¹ Sc: [Ar] 3d¹4s² Ga: [Ar] 3d¹⁰4s²4p¹
He: 1s² Mg: 1s²2s²2p⁶3s² Ti: [Ar] 3d²4s² Ge: [Ar] 3d¹⁰4s²4p²
Li: 1s²2s¹ Al: 1s²2s²2p⁶3s²3p¹ V: [Ar] 3d³4s² As: [Ar] 3d¹⁰4s²4p³
Be: 1s²2s² Si: 1s²2s²2p⁶3s²3p² Cr: [Ar] 3d⁵4s¹ Se: [Ar] 3d¹⁰4s²4p⁴
B: 1s²2s²2p¹ P: 1s²2s²2p⁶3s²3p³ Mn: [Ar] 3d⁵4s² Br: [Ar] 3d¹⁰4s²4p⁵
C: 1s²2s²2p² S: 1s²2s²2p⁶3s²3p⁴ Fe: [Ar] 3d⁶4s² Kr: [Ar] 3d¹⁰4s²4p⁶
N: 1s²2s²2p³ Cl: 1s²2s²2p⁶3s²3p⁵ Co: [Ar] 3d⁷4s² Rb: [Kr] 5s¹
O: 1s²2s²2p⁴ Ar: 1s²2s²2p⁶3s²3p⁶ Ni: [Ar] 3d⁸4s² Sr: [Kr] 5s²
F: 1s²2s²2p⁵ K: [Ar] 4s¹ Cu: [Ar] 3d¹⁰4s¹ Y: [Kr] 5s²4d¹
Ne: 1s²2s²2p⁶ Ca: [Ar] 4s² Zn: [Ar] 3d¹⁰4s² Zr: [Kr] 5s²4d²
(Note that chromium and copper are well-known exceptions to the simple filling order: a half-filled or completely filled 3d subshell alongside a single 4s electron is energetically favorable.)
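The configurations tabulated above (exceptions such as chromium and copper aside) can be generated mechanically from the filling order and the subshell capacities. This sketch applies the naive Aufbau/Madelung sequence for elements through xenon:

```python
# Filling order per the Madelung rule; covers atomic numbers up to 54.
FILL_ORDER = ["1s", "2s", "2p", "3s", "3p", "4s", "3d", "4p", "5s", "4d", "5p"]
# Each subshell holds 2 * (2l + 1) electrons.
CAPACITY = {"s": 2, "p": 6, "d": 10, "f": 14}

def electron_configuration(atomic_number):
    """Naive ground-state configuration (valid for Z <= 54 with this
    fill order). Real atoms such as Cr and Cu deviate from it."""
    remaining, parts = atomic_number, []
    for subshell in FILL_ORDER:
        if remaining <= 0:
            break
        placed = min(remaining, CAPACITY[subshell[-1]])
        parts.append(subshell + str(placed))
        remaining -= placed
    return " ".join(parts)

print(electron_configuration(11))  # Na -> 1s2 2s2 2p6 3s1
print(electron_configuration(26))  # Fe -> 1s2 2s2 2p6 3s2 3p6 4s2 3d6
```

Comparing the output against the table makes the role of each filling principle concrete.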
Nucleus of the Atom
Radioactivity and electromagnetism’s effects on cathode rays
In the experiments by J.J. Thomson, a cathode ray tube was used to make a beam of electrons (cathode rays) by heating metal plates inside the tube. These rays were determined to be made up of tiny, negatively charged particles (i.e., electrons) that came out of the metal. The negative charge was demonstrated with electricity and magnetism, which could be used to bend the beam. It was later shown that the radiation emitted by radioactive materials could likewise be bent by electric and magnetic fields; Ernest Rutherford demonstrated this for both beta and alpha radiation. Gamma radiation and x-radiation, on the other hand, cannot be bent in this way, since both forms of radiation are electrically neutral. Nevertheless, it is important to talk about x-rays: there are many similarities between how x-rays are produced and interact with matter and the way in which Rutherford’s insights into radioactivity allowed him to probe further into the realm of subatomic particles. Therefore, to make all of these things tangible, we first review how x-rays are made, setting the stage for the discoveries involving the atomic nucleus.
The process of x-ray production
When the x-ray tube is activated, electrons are “boiled off” from the wire element (i.e., a thin filament of tungsten) to form an electron cloud. The wire element is strategically located opposite the target anode as part of a built-in concavity on the cathode (the rim of which is slightly more negatively charged to concentrate the electrons in the cloud). The number of electrons boiled off is directly related to the tube current. Nearly simultaneously with tube activation, the electrons in the cloud are forcefully attracted to the target anode due to the potential difference between the cathode and anode. The speed of the electrons and the efficiency of their attraction depend on the potential difference across the tube.
Figure: X-ray tube schematic. When high-speed (incident) electrons strike the target, less than 1% of the kinetic energy is converted into x-rays, with the remaining kinetic energy (99% or greater) converted into heat. Key: cathode heating coil (C) and heater voltage (Uh); target anode (A) made of tungsten and anode voltage (Ua); emitted x-rays (X); and cooling system denoted by water in (Win) and water out (Wout).
More specifically, two x-ray producing processes occur. First, we consider interactions of incident electrons with the nucleus of a tungsten atom, in which the incident electron slows down and changes direction (called bremsstrahlung, or “braking radiation”). Bremsstrahlung radiation is emitted at energies ranging from zero up to the maximum energy (the operating kV). Second, we consider collisions of incident electrons with orbital electrons of the tungsten atom. The collision knocks an inner-shell electron out of orbit, and when an outer-shell electron drops in to fill the vacancy, the energy difference is emitted as an x-ray (producing characteristic radiation). Characteristic radiation is so named because the x-ray energy produced corresponds to the difference in electron binding energies between the shells involved, and it is always the same for a specific target atom.
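The kV/energy relationship in the bremsstrahlung description above can be captured in a couple of lines. The under-1% x-ray yield is the approximate figure quoted in the figure caption, not a precise constant:

```python
def bremsstrahlung_energy_range_kev(tube_kv):
    """Bremsstrahlung photons span energies from ~0 up to the full
    kinetic energy of the incident electrons, so the maximum photon
    energy in keV equals the tube's operating kilovoltage."""
    return (0.0, float(tube_kv))

def anode_heat_fraction(xray_fraction=0.01):
    """Roughly: less than ~1% of electron kinetic energy becomes
    x-rays; the rest is deposited in the target anode as heat."""
    return 1.0 - xray_fraction

print(bremsstrahlung_energy_range_kev(100))   # (0.0, 100.0) keV at 100 kV
print(round(anode_heat_fraction(), 2))        # 0.99
```

This is why the tube schematic includes a dedicated cooling system for the anode.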
The Nucleus and the Proton
Rutherford and the inverse square law (i.e., Coulomb’s law)
The second foray into subatomic structure occurred in 1911, when Ernest Rutherford deduced the proton (or, perhaps more accurately, the nucleus) via his famous gold leaf experiment. In this experiment, Rutherford fired alpha particles into a thin layer of gold leaf (think something like “gold tin foil”). What he expected was that most, if not all, of the alpha particles would transmit straight through the foil. To his astonishment, however, he observed that some alpha particles scattered at large angles, with a few even bouncing nearly straight back. Rutherford concluded that, structurally, atoms at their core contain a concentrated mass of positive charge (including what we have subsequently come to call the proton).
TECHNICAL INSERT: The mathematical law that governs how strongly two charged particles will repulse or attract each other is called Coulomb’s law. We write it, as follows:
F = k × (q₁ × q₂) / d².
Here F is the magnitude of the force of repulsion (or attraction), k refers to an electrostatic constant, q₁ is the electric charge of one particle, q₂ is the electric charge of the other particle, and d² refers to the square of the distance between the two particles. We say that the repulsive (or attractive) force is inversely proportional to the square of the distance. Because both the alpha particles and the protons contained in the nuclei of the gold atoms are positively charged (and because like charges repel), Rutherford deduced that the strength – or magnitude – of repulsion depended on the proximities between the two particle types. In other words, the closer an alpha particle came to a gold atom’s nucleus, the more strongly it was deflected. Conversely, alpha particles that transmitted through the gold leaf were not influenced in any meaningful way by gold nuclei. Incidentally, it is worth mentioning that other forms of inverse square laws exist for various physical phenomena. For example, Newton’s law of universal gravitation is an inverse square law that explains why gravitational attraction between two celestial bodies fades away so quickly as their distance from each other increases, and vice versa. Here the electrostatic constant k is replaced by a gravitational constant (G), and q₁ and q₂ are replaced by m₁ and m₂, which are the masses of the first and second objects of interest, respectively. Similarly, an inverse square law centered around the phenomenon of a point source of light helped Newton (and helps us) explain the manner in which rays of light spread out. What is more, x-ray intensity is inversely related to distance squared. If you double your distance from a radiation source, then your exposure to the intensity of the source drops fourfold. And, likewise, if you cut your distance to the source by half, then your exposure to its intensity increases fourfold.
In fact, the inverse square law places the idea of distance from a point source as the most important and least expensive component (it is the easiest to implement) in a well-organized radiation protection program for healthcare and medical personnel. Overall, time, distance, and shielding together form the three cardinal principles in radiation protection (with the amount of time of exposure considered the most important component for patients).
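Coulomb’s law and its inverse-square behavior can be verified numerically. In the sketch below, the charges are those of an alpha particle (+2e) and a gold nucleus (+79e); the two distances are purely illustrative values chosen to show the fourfold scaling described above:

```python
K_COULOMB = 8.9875517923e9   # electrostatic constant k, N*m^2/C^2
E_CHARGE = 1.602176634e-19   # elementary charge e, in coulombs

def coulomb_force(q1, q2, d):
    """Magnitude of the force between two point charges:
    F = k * q1 * q2 / d^2 (positive for like charges, i.e. repulsion)."""
    return K_COULOMB * q1 * q2 / d ** 2

# Alpha particle (charge +2e) near a gold nucleus (charge +79e);
# the distances below are hypothetical, for illustration only.
near = coulomb_force(2 * E_CHARGE, 79 * E_CHARGE, 1e-13)
far = coulomb_force(2 * E_CHARGE, 79 * E_CHARGE, 2e-13)
print(round(near / far))  # doubling the distance cuts the force fourfold: 4
```

The same fourfold scaling governs x-ray intensity, which is why distance is such a cheap and effective radiation-protection measure.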
Bohr model of the atom
The work by Ernest Rutherford, coupled with the earlier work identifying the electron by J.J. Thomson, led to the Bohr model of the atom (named after Niels Bohr). In this model, electrons are viewed as orbiting the positively charged nucleus of an atom, in much the same way as the planets orbit the sun.
Biological composition and the proton
A calcium atom contains 20 protons, and therefore, its atomic number is 20. The effective atomic number of bony tissues (i.e., material with a large content of calcium atoms) is 13.8, which is nearly double that of soft tissues. However, as one might guess, soft tissues contain much lighter atoms, including hydrogen (1 proton), carbon (6 protons), and oxygen (8 protons). Therefore, when relative percentages are accounted for, an effective atomic number of 7.4 is yielded from soft tissues.
What follows are several examples of differences in the atomic numbers and densities of matter found in the makeup of the human body:
Air—the effective atomic number of air is 7.78
Fat—the effective atomic number of fat is 6.46
Soft tissue—the effective atomic number of soft tissues is 7.4
Water—the effective atomic number of water is 7.51
Muscle—the effective atomic number of muscle is 7.64
Spongy bone—the effective atomic number of spongy bone is 12.3
Compact bone—the effective atomic number of compact bone is 13.8
Calcium—the atomic number of calcium is 20.0
The Nucleus and the Neutron
The third and final foray into subatomic structure came from James Chadwick and his 1932 discovery of the neutron.
Figure: Four hydrogen atoms to one helium atom.
One clue that something else existed in the nucleus of an atom came from the realization that it takes four atoms of hydrogen to balance the mass of one atom of helium. Today, we know that ordinary hydrogen contains just one proton and no neutrons, while each helium atom contains two protons and two neutrons. Neutrons help keep the centralized collection of positively charged protons in the atomic nucleus from flying apart. In fact, together protons and neutrons are referred to as nucleons. What is more, in lighter atoms (hydrogen excepted), there is roughly one neutron for each proton; heavier atoms require proportionally more neutrons. Atoms of the same element with differing numbers of neutrons are called isotopes, and isotopes whose neutron-to-proton ratio is off-balance commonly undergo radioactive decay. Because a neutron has a little more mass than a proton, excess neutrons can decay into protons to help stabilize the atom. This behavior is a ramification of the law of conservation of matter and the law of conservation of energy.
Atomic Structure and Quantization
From Planck to Einstein…to Schrödinger
Ultraviolet catastrophe and the photoelectric effect
Max Planck solved the ultraviolet catastrophe associated with blackbody radiation and in so doing discovered that energy is quantized. Shortly thereafter, the idea of quantization was extended from energy to light itself by none other than Albert Einstein. He explained how light could free electrons from a metal plate if the frequency of the light was high enough; the freed electron was dubbed a photoelectron. The ramification of this work was that light itself behaves as a particle (and not just as a wave). The particle of light was called a photon; photons are quantized bundles, or packets, of light energy. Later, Niels Bohr invoked quantization as a mechanism to explain the movement of electrons from inner to outer shells (and vice versa) depending on the discrete amount of energy that a particular atom absorbed (or released).
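Einstein’s photoelectric relation, KE = hf − W, can be sketched directly. The work function value below is roughly that of sodium, used here only as an illustrative example:

```python
PLANCK_EV = 4.135667696e-15  # Planck constant in eV*s

def photoelectron_energy_ev(frequency_hz, work_function_ev):
    """Einstein's photoelectric equation: KE = h*f - W. A negative
    result means the light's frequency is too low to free an electron,
    no matter how intense the beam is."""
    return PLANCK_EV * frequency_hz - work_function_ev

# Hypothetical metal with a ~2.3 eV work function (roughly sodium):
print(photoelectron_energy_ev(6.0e14, 2.3) > 0)  # green light frees electrons
print(photoelectron_energy_ev(4.0e14, 2.3) > 0)  # red light does not
```

The frequency threshold, independent of intensity, is exactly the particle-like behavior that the wave picture of light could not explain.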
Waves and particles imply particles and waves
The duality concept of light, that is, the ability of light to exhibit both wave-like and particle-like properties, was extended to matter by Louis de Broglie, who proposed that particles such as electrons also have a wavelength. This proposal was confirmed experimentally: electrons directed at a partition containing a double slit (the famous double-slit experiment) produce a diffraction pattern similar to that of light, the realization being that electrons act as if they were waves. The broader implication is that not only light and electrons are subject to dual characteristics, but any particle – regardless of size – is equally subject to wave-like and particle-like properties.
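The “any particle, regardless of size” claim follows from the de Broglie relation, λ = h / (mv). This sketch compares a fast electron with an everyday object (the baseball mass and speeds are illustrative values):

```python
PLANCK = 6.62607015e-34  # Planck constant in J*s

def de_broglie_wavelength(mass_kg, speed_m_s):
    """de Broglie relation: wavelength = h / (m * v)."""
    return PLANCK / (mass_kg * speed_m_s)

electron = de_broglie_wavelength(9.109e-31, 2.0e6)  # a fast electron
baseball = de_broglie_wavelength(0.145, 40.0)       # a thrown baseball
print(electron)  # ~3.6e-10 m, comparable to atomic spacing
print(baseball)  # ~1.1e-34 m, far too small to ever observe
```

The electron’s wavelength is comparable to the spacing between atoms, which is why electron diffraction is observable while a baseball’s wave nature never is.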
Schrödinger model of the atom
Electrons behave in both a particle-like fashion and a wave-like fashion. The wave-like property of electrons (coupled with the idea that the electron is quantized) allowed physicists to think of them as standing waves. Max Born furthered this thought by saying that the wave function of an electron is a probability function. All of these discoveries, from the riddle of the ultraviolet catastrophe resolved by Max Planck to the probability function associated with which orbital an electron will inhabit, were then extended via the Heisenberg uncertainty principle (developed by Werner Heisenberg). For the subject of chemistry, these discoveries meant that a more realistic model of the atom places electrons in orbitals (and not orbits), as introduced by Erwin Schrödinger; this came to be known as the Schrödinger model of the atom, and it is the model we have been using throughout our discussion.
Foundational Concepts in Chemistry
As hinted at by the word valence, which refers to the combining power of an element, valence electrons are of extreme importance in the formation of chemical bonds. These electrons belong to the outer-most shell of an atom. What is more, similar to how we employ features of the periodic table to help us better understand the way in which orbitals are filled, we can use the table’s columns (also known as groups) to determine how many valence electrons a neutral atom possesses. For example, hydrogen has one valence electron, boron has three, carbon has four, nitrogen has five, and so on (please see the schematic below).
Figure: Schematic showing the number of valence electrons (circled) per column (also known as group) (indicated by the arrows). Notice that the valence shells for both helium and the group eighteen elements contain the maximum number of allowed electrons (two electrons for helium and eight electrons each in the outer-most shells for neon, argon, krypton, xenon, radon, and oganesson). Valence shell fullness means that elements helium through radon (i.e., those elements occurring naturally; oganesson is not naturally occurring) are inert, which means they are very stable. Categorically, helium and the group eighteen elements belong to the noble gas group. In fact, all group eighteen elements – apart from oganesson – are gases at room temperature. As for oganesson, it is predicted to be a solid at room temperature due to relativistic effects. (Please note that the noble gas helium was moved to its proper location within the second group of the table.)
Atom vs. Element vs. Molecule vs. Compound
What is the difference between an atom and an element? And what is the difference between a molecule and a compound?
When we look at a substance in which every part is the same throughout, we are looking at an element. Elements are pure substances. For example, diamonds are made of pure carbon; the smallest part of a diamond is a single carbon atom, and every carbon atom in the sample shares the identical chemical identity of the element carbon. Another way to define an element is as a substance that cannot be broken down into other substances (that is, other elements). Compounds, on the other hand, are substances composed of two or more different elements chemically bonded together; this differential composition is a defining characteristic of compounds. A compound is still a pure substance, but unlike an element it can be broken down into simpler substances. The smallest part of a compound is called a molecule, and in a molecule of a compound, electrons are shared among two or more different atomic nuclei.
What is a diatomic element?
Of the one hundred and eighteen elements found on the periodic table, seven exist naturally as diatomic pairs. These are properly referred to as diatomic elements: H₂, N₂, O₂, F₂, Cl₂, Br₂, and I₂. Most of these are gases at room temperature (bromine, however, is a liquid, and iodine is a solid), so it is not uncommon to hear the gaseous members referred to as diatomic gases. A simple way to differentiate a diatomic element from a non-diatomic element is to compare two different gas-filled balloons: one balloon is filled with oxygen, which is diatomic by nature; the other is filled with helium and, therefore, is non-diatomic by nature.
The Periodic Table! Exactly!
How did the periodic table get its name? What are some of the table’s common groupings and trends?
The renowned periodic table of the elements is broken down into rows and columns. Each row is called a period and each column is called a group. The term period derives from the fact that as we move from left to right across each row, the elements undergo repeatable patterns (i.e., a certain periodicity exists). When the table was first conceived, not every element was known; however, the periodicity made it possible to predict the physical and chemical properties of unknown elements. The table was finalized in 2016 when elements 113, 115, 117, and 118 of the seventh period were officially named. The 113th element is now known as nihonium (symbol: Nh); the 115th element is moscovium (symbol: Mc); the 117th element is tennessine (symbol: Ts); and the 118th element is oganesson (symbol: Og).
Figure: The periodic table of the elements.
Several groups have also been given specific names, as follows: alkali metals (Group 1); alkaline earth metals (Group 2); pnictogens (Group 15); chalcogens (Group 16); halogens (Group 17); and noble gases (Group 18).
Four fundamental trends exist across the periodic table. These trends are atomic radius, ionization energy, electron affinity, and electronegativity.
Figure: The trends of the trends. Apart from electronegativity, this schematic shows the trends for atomic radius, ionization energy, and electron affinity.
These four trends, along with the closely related ionic radius, are described here, as follows:
Atomic radius—the radius of an atom is determined by the number of shells of electrons (including the influence of the so-called shielding effect on non-valence electrons) and by the number of protons found in the nucleus.
Ionic radius—the trend associated with ionic radius is similar to that for atomic radius but with emphasis on electrons placed in or taken out of outer-most shells.
Ionization energy—the amount of energy that is needed to remove an electron from an atom’s outer-most shell is called ionization energy. Because the electromagnetic attraction between electrons and protons drops off quickly as distance from the nucleus increases, the amount of energy needed to remove an outer-most electron likewise decreases quickly as the distance increases.
Electron affinity—the energy change when an atom gains an electron; it tells us how readily an atom accepts an additional electron.
Electronegativity—describes the ability of an atom in a chemical bond to attract and hold the shared (bonding) electrons.
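For students who learn best from a compact summary, the directional tendencies above can be collected into a small table. The sketch below (a study aid of our own, not part of the contest materials) encodes the usual rule of thumb for each trend:

```python
# A minimal sketch encoding the usual directional tendencies of the
# periodic trends. "across" = moving left to right within a period;
# "down" = moving top to bottom within a group.
trends = {
    "atomic_radius":     {"across": "decreases", "down": "increases"},
    "ionization_energy": {"across": "increases", "down": "decreases"},
    "electron_affinity": {"across": "increases", "down": "decreases"},
    "electronegativity": {"across": "increases", "down": "decreases"},
}

for name, direction in trends.items():
    print(f"{name}: {direction['across']} across a period, "
          f"{direction['down']} down a group")
```

Notice that atomic radius runs opposite to the other three: as the nuclear charge grows across a period, the electron cloud is pulled in tighter, and electrons become both harder to remove and easier to attract.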
What is Electronegativity?
A covalent bond forms when one or more valence electrons are shared by two atoms. However, if one of the atoms monopolizes a shared electron, that atom's electronegativity is higher. Electronegativity simply refers to the tendency of an atom participating in a chemical bond to exert a dominant influence over the valence electrons. When such unequal sharing of valence electrons occurs, the bond is called a polar covalent bond; that is, we observe a net partial polarization between the bonded atoms. If, however, the dominance of one atom in a bond permits it to take one or more valence electrons entirely away from the other atom, then we say the bond is ionic (i.e., an ionic bond). By contrast, the ability of an individual atom to attract and keep an additional electron of its own is termed electron affinity.
A classic example of a polarized covalent bond is the bond between an oxygen atom and two hydrogen atoms (i.e., a water molecule, or H2O). In the case of H2O, the space surrounding the oxygen side of the molecule is partially negative, and the space surrounding the hydrogen side (each of the two hydrogen “tails”) is partially positive.
More Foundational Concepts in Chemistry
What are Chemical Bonds?
How do ionic and covalent bonds differ?
With an ionic bond, one atom takes and keeps the valence electron(s) of another atom. The atom most inclined to take the valence electron (usually one or two of them) is the one for which doing so most easily completes its outer shell, and this atom becomes negatively charged. In many cases, the atom that loses the electron(s) is left with a stable electron configuration of its own, and this atom becomes positively charged. The resultant bond has ionic characteristics.
With covalent bonds, the situation is similar, except that an atom’s pull for the valence electron (i.e., to take the electron) is not as strong. Subsequently, both atoms will share the electron(s).
What is the difference between a non-polar covalent bond and a polar covalent bond?
A great way to look at the difference between non-polar and polar covalent bonds is to ask whether one atom monopolizes (or "hogs") the valence electron(s). As previously mentioned, the water molecule is a good example of polar covalent bonding. As for non-polar covalent bonds, the attraction that holds the two atoms of a diatomic molecule together is a good example.
What are hydrogen bonds and Van der Waals interactions?
So-called hydrogen bonds form between a hydrogen atom that is covalently bonded to an electronegative atom and a nearby electronegative atom, often on another molecule. These bonds are generally weaker than covalent bonds, but there may be many of them, and they are important in biology.
Another important interaction in biology is the Van der Waals interaction; these weak attractions come into play, for example, when a molecule "docks" at a cell's receptor.
For the contest:
Part 1: In your response, describe how you might try to learn the periodic trends. More specifically, describe the way you remember the directional tendency that each trend exhibits across the columns and rows of the periodic table (as was shown in the background content above, i.e., the trends of the trends). In addition, given that Coulomb's law quantifies the magnitude of attraction or repulsion between two charged particles, explain which periodic trend depends most on this law and why.
Part 2: Describe an acceptable method for writing the formulas of ionic compounds and for naming covalent compounds. With respect to the latter, differentiate between glucose and fluorodeoxyglucose. Also, briefly describe the area of medicine, and the technique involved, for which fluorodeoxyglucose is routinely used (Hint: see question #1 above).
Part 3: The body operates by electrical and chemical interactions. Within the realm of biology, why are covalent bonds, hydrogen bonds, and Van der Waals interactions important?
Part 4: Explain why diatomic gases are not noble gases. Then, identify the heaviest, non-radioactive element and explain its role in medical imaging. Finally, pick any four elements and compare their physical and chemical properties.
Bonus material: Buried in the writing prompt for this question, a comment was made about pure carbon serving as the underlying structure of diamond. However, another famous substance is also made of pure carbon. Please identify this substance, as well as some of its uses. In addition, please explain what carbon-14 is.
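Part 1 above mentions Coulomb's law. As a quick numerical illustration of the inverse-square behavior behind the periodic trends (this sketch is our own and is not part of the contest materials; the values are standard SI constants):

```python
# Coulomb's law, F = k * q1 * q2 / r^2, illustrating how the attraction
# between a proton and an electron falls off with distance.
k = 8.9875517923e9      # Coulomb constant, N*m^2/C^2
e = 1.602176634e-19     # elementary charge, C

def coulomb_force(r_m):
    """Magnitude of the proton-electron attraction at separation r (meters)."""
    return k * e * e / r_m**2

# Doubling the separation cuts the force to one quarter (inverse-square law).
f1 = coulomb_force(1e-10)   # roughly one angstrom, an atomic length scale
f2 = coulomb_force(2e-10)
print(f1 / f2)  # → 4.0
```

This rapid falloff is why outer-shell electrons, being farther from the nucleus, are so much easier to remove than inner-shell electrons.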
Question #3: A biology-based question that deals with the heart. This question helps prepare students for career paths in medicine as well as in other healthcare-related fields.
Physicians practicing regenerative medicine improve the structure and function of organs and tissues by affecting the biological status of tissues at the cellular level. The purpose of such intervention is to prevent deficits that ultimately lead to functional impairments at the organ and whole-body levels. In fact, the goals in regenerative medicine, biomaterials science, and tissue engineering are inter-related: to minimize loss and improve structure and function through interventions compatible with a patient's individual physiology. Indeed, a well-developed understanding of anatomy and physiology is a key element of successful outcomes within these interrelated and interdependent fields.
The human heart is a uniquely designed, archetypal, and even seminally prototypical organ. What is more, when seeking novel tissue engineering strategies for regenerative cardiology, physician researchers and material scientists are finding it increasingly useful to mimic the heart’s architectural and physiological properties. The prevailing techniques are those that utilize the plasticity of cells, including adult stem cell therapy. For example, in what offers promising treatment potential for heart failure, Noor et al. (2019) combined the electrical, mechanical, and biological building blocks of the myocardium – derived through therapeutic manipulation of adult stem cells – to produce the world’s first 3D printed heart.
For students who choose this question:
Part 1: Creationists identify humankind, including the human heart, as God's special creation. For the contest, because the cell-tissue interface in the healthy myocardium comprises a structurally complex yet distinct blend of biological and electrochemical properties that affect a patient's functional capacity and physical performance, students must explain the heart's electrical system (as shown below).
Figure: Conduction system of the heart.
Part 2: Creationism offers an exacting argument for a prototypical and archetypal design of the heart. Whereas Psalm 73:26, Proverbs 4:23, and Genesis 1 and 2 underpin this claim, mathematical evidence may be obtained from fractal analysis of heart rate (as well as myocardial microstructure). With respect to heart rate and rhythm analysis, the literature is replete with such information. For instance, valuable insight into the overall health of an embryo is available as early as the sixth week of pregnancy based on spectral-Doppler ultrasound of embryonic heart rate variability (Shenker et al., 1986; Doubilet and Benson, 1995). Therefore, students should also include in their paper a description of several key events in the historical timeline probing heart rate variability by reviewing "Heart Rate Variability – A Historical Perspective" by George E. Billman.
Remark: In Answers Research Journal, Sled (2018) promotes an integrative, biblical approach to the study of anatomy and physiology. In addition, please see Purdom (2007) for a non-technical article discussing the pitfalls – and merits – of cell-based therapies. Finally, Wininger (2020) brings this topic together by offering several practical insights from the fields of biomimicry and bioengineering in cardiovascular care.
Question #4: From the many discoveries made in medicine (e.g., there is no doubt we have learned much about our body’s structure and function) to the world around us (e.g. the field of paleontology), the microscope has played an instrumental role in science. This question asks students to first dive into the history of the microscope, and then take a close look at some of the ground-breaking investigations currently being performed in the area of dinosaur soft tissue research.
Part 1: The medical journal American Journal of Pathology was previously known as the Journal of Medical Research, and before that it was known as the Journal of the Boston Society of Medical Sciences. For students who choose this topic, please follow the links from the journal archives (click here) maintained by the U.S. National Library of Medicine in order to search out issue 6 of volume 4 of the Journal of the Boston Society of Medical Sciences. In that issue, locate the short article “The Development of the Microscope” by Harold C. Ernst. This well laid-out article will provide you with a brief history of the microscope. Please write a concise summary of this article.
Figure: Parts schematic for a compound light microscope.
Part 2: The Dinosaur Soft Tissue Research Institute (DSTRI) actively pursues questions surrounding the distinct and irrefutable preservation of dinosaur soft tissue. They carry out original research on the various kinds of dinosaur cell types found today. This activity includes a detailed mapping of the locations around the globe where such soft tissues have been recovered. For your paper, please review the two benchmark articles on this topic:
“Preservation of Triceratops horridus Tissue Cells from the Hell Creek Formation, MT” by Mark Armitage, which appeared in Microscopy Today, and
“Soft Sheets of Fibrillar Bone from a Fossil of the Supraorbital Horn of the Dinosaur Triceratops horridus” by Mark Armitage and Kevin Anderson, which appeared in Acta Histochemica.
Please comment on the impact of this research. In addition, please systematically review and comment on several of the available videos from DSTRI, in particular the video clips that reveal never before seen footage of Nanotyrannus soft tissue, as well as the video featuring James Solliday, the senior microscopist at DSTRI, which concerns the various types of soft tissues being found in dinosaur remains.
Remark: Please click here for DSTRI’s articles-and-updates webpage to retrieve the two articles and to find the links to the video clips.
Question #5: This question deals with the fossil record, as well as geology.
This three-part question collectively addresses the structure and sophistication of trilobites (and the period known as the Cambrian explosion), the aftermath of the Mount St. Helens eruption, and the formation of stalactites and stalagmites.
First, students should discuss trilobites and give their opinion on whether these creatures were primitive or complex (and why). Also, please discuss what is meant by the term Cambrian explosion, and whether it might be explained by a global flood.
Next, students should summarize the article, “Mount St. Helens and Catastrophism” by Steven A. Austin, PhD, who presented this material at the First International Conference on Creationism, Pittsburgh, Pennsylvania, August 4 – 9, 1986.
Finally, much debate has taken place between creationists and evolutionists concerning the amount of time required for the formation of stalactites and stalagmites, with evolutionary claims extending into the countless thousands of years. With this in mind, students should discuss the formation and growth of stalactites and stalagmites upon review of the 1932 article published in the Ohio Journal of Science, “An Unusual Occurrence of Stalactites and Stalagmites” by Karl Ver Steeg of the College of Wooster, Wooster, Ohio. And, as part of your response, please make sure you describe the factors that influence stalactite and stalagmite growth rates. Please cite sources that support your argument.
Science deals with observations of present states and processes, and can only speculate about the prehistoric past.
– Jeremy L. Walter, PhD, Mechanical Engineering
Remark: Students who lean heavily towards science and mathematics may weight their papers accordingly, whereas students who prefer a robust exploration of history may also weight their papers accordingly. Regardless, responses should be balanced with considerations given to origins. Also, with respect to stalactites and stalagmites, a summary that highlights any differences between stalactites exposed to the elements (like those discussed in the aforementioned 1932 article and also pictured below) and stalactites formed underground (like those found within the Ohio Caverns, similarly pictured below) is also encouraged. In our view, such a comparison of contrasted settings shows intriguing results with respect to growth rates.
Figure: The top left image shows the face page of Karl Ver Steeg's 1932 article "An Unusual Occurrence of Stalactites and Stalagmites." The top right image is a companion picture from Ver Steeg's article, in which we see stalactites made of calcium hydroxide, chemical formula Ca(OH)2, that formed in ambient air conditions under a railroad bridge above Bever Street in Wooster, Ohio. The bottom center image is a recent snapshot of underground stalactites and stalagmites made of calcite, or CaCO3, presently seen at the Ohio Caverns in West Liberty, Ohio.
Question #6: This question touches on climatology and oceanography with respect to a biblical creation worldview.
This topic considers certain design elements at work today in the heating and cooling of the planet. Students who choose this topic should discuss the patterns linking climate and ocean-derived storms: in other words, greenhouse heating and the hurricane as a heat engine, respectively.
For details, please download: “On the Study of Climate and Oceanography.”
Recommended reading: The Creation Science Fellowship recently held its Eighth International Conference on Creationism in Pittsburgh, Pennsylvania from July 29th to August 1st, 2018. During the conference, Dr. Steven Gollmer, professor of physics at Cedarville University in Ohio, updated conference attendees on the status of his efforts in developing a global-scale computational model for post-Flood Ice Age precipitation. Because Dr. Gollmer is using software developed by NASA, a completed climate model of this sort would be recognized and welcomed by many climate scientists and graduate students as a benchmark model. As such, secular and creation scientists who specialize in local weather patterns could then use the model to customize their own locality-based models to gain a clearer picture of localized post-Flood Ice Age effects. Apart from the obvious benefit of obtaining a benchmark model within the field of climatology, the intrinsic features of the model would be of added value within the creation literature to help archeologists, for example, better understand the post-Flood movements of humankind around the globe.
In light of his busy schedule at the conference, we at Ashland Creation Colloquium were delighted that Dr. Gollmer sat down for an interview. It is our hope that students will be encouraged by what Dr. Gollmer had to say with respect to his worldview, as well as motivated through his work at Cedarville University concerning the study of origins, specifically post-Flood Ice Age climate modeling. To read the full transcript, please click “Interviewing Steven Gollmer, PhD.”
Remark: To carry out his work on Ice Age precipitation patterns following the Global Flood, Dr. Gollmer is using state-of-the-art computational software for climate modeling developed by scientists at NASA’s Goddard Institute for Space Studies (GISS). This software is called GISS Model E2. In addition, Dr. Gollmer is operating the project using the most current version of GISS Model E2 — known as AR5. (Please click here to learn more about the GISS global climate modeling project.)
Question #7: This question deals with knowledge, research, and what science is.
In response to a question that ultimately dealt with mankind’s extra-biblical search for knowledge, C.S. Lewis (1970) wrote in God in the Dock:
If the solar system was brought about by an accidental collision, then the appearance of organic life on this planet was also an accident, and the whole evolution of man was an accident too. If so, then all our present thoughts are mere accidents — the accidental byproduct of the movement of atoms. And this holds for the thoughts of materialists and astronomers as well as for anyone else’s. But if their thoughts — i.e., of Materialism and Astronomy — are merely accidental byproducts, why should we believe them to be true? I see no reason for believing that one accident should be able to give me a correct account of all the other accidents. It’s like expecting that the accidental shape taken by the splash when you upset a milk-jug should give you a correct account of how the jug was made and why it was upset. (pp. 52-53)
Science can only deal with things that are measurable. Human knowledge is limited in its scope by ignorance (we don’t have all the facts), error (we misinterpret the facts), and bias (we distort the facts). However, the Bible tells us:
Thus says the LORD: “Let not the wise man boast in his wisdom, let not the mighty man boast in his might, let not the rich man boast in his riches, but let him who boasts boast in this, that he understands and knows me, that I am the LORD.” (Jeremiah 9:23,24a, English Standard Version)
Given the right perspective, a search for scientific knowledge can be a noble endeavor, and at the same time, a monumental and almost precarious task. One historical example was the concerted efforts by those researchers who tackled the Human Genome Project (completed in 2003). And yet, if we look for a somewhat similar but drastically different example currently underway, we may consider the Physiome Project whereby various scientific, engineering, and mathematical disciplines are coming together in an effort to shed light on how each and every component in the human body works as an integrated whole. The aims of the Physiome Project are meant to be quite influential, as outcomes are projected to impact (in a positive way) the everyday roles of physicians, biochemists, chemists, and bioengineers. To this end, a popular example involves the field of cardiology, where efforts to better understand potentially fatal heart rhythms are ongoing.
For the contest:
If we consider what it means to search for scientific knowledge, we must ask ourselves just what exactly is science and research, and what is meant by the term scientific method? Thus, students who address this specific question must summarize the articles “Creation Conversation: The Turning Point” posted by the Institute for Creation Research and “Science or the Bible?” posted by Answers in Genesis. You are also encouraged to research additional sources — please cite the sources for your argument.
Question #8: This question deals with cosmology.
In the 2018 article called “A Case for Cosmological Redshifts,” astronomer Danny R. Faulkner reviewed at length three important discoveries in cosmology: the Hubble relation, the expansion of the universe, and the redshifts of quasars. Moreover, Dr. Faulkner explored the historical ramifications of these discoveries while emphasizing the development of a well-ordered cosmological model of the universe, especially regarding the construction of a correct biblical cosmology. As a result of the comprehensiveness and utility of the review, Dr. Faulkner’s article is recognized as a valuable resource for helping many of us better understand the observational science that has taken place for close to a century now on the expanding, observable universe. For those of us who are not astronomers, this is something of particular importance.
Dr. Faulkner sheds light on three issues
Issue 1 – Is the universe expanding?
Accordingly, the most straightforward interpretation of the data is, "yes," the universe is expanding. Importantly, Dr. Faulkner considers the significance of the Hubble relation — named after Edwin Hubble from his 1929 paper "A Relation Between Distance and Radial Velocity Among Extra-Galactic Nebulae." This relation was derived from Hubble's original observations using the Hooker 100-inch telescope at the Mount Wilson Observatory in California to plot a data set of 24 galaxies in terms of each galaxy's redshift (expressed as a velocity, in kilometers per second) against its distance from the earth (expressed in parsecs). It turns out that a linear relationship exists, i.e., the Hubble relation, such that the larger the magnitude of a galaxy's redshift, the greater the distance between the earth and that galaxy. In fact, Hubble's plotted data set is considered the original observational framework for showing expansion of the universe.
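The linearity of the Hubble relation can be made concrete with a short sketch. The relation is v = H0 × d; the value of H0 used below (~70 km/s per megaparsec) is a commonly quoted modern figure chosen purely for illustration, not the value from Hubble's 1929 paper, which was considerably larger:

```python
# The Hubble relation, v = H0 * d: recession velocity grows linearly
# with distance.
H0 = 70.0  # illustrative Hubble constant, km/s per megaparsec

def recession_velocity(distance_mpc):
    """Recession velocity (km/s) of a galaxy at the given distance (Mpc)."""
    return H0 * distance_mpc

# A galaxy twice as far away recedes twice as fast -- the linear relationship.
print(recession_velocity(100))  # → 7000.0
print(recession_velocity(200))  # → 14000.0
```

The doubling behavior in the last two lines is exactly what a straight line through the origin on Hubble's velocity-versus-distance plot implies.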
Remark: The Hubble relation confirmed early expanding universe predictions based on general relativity (one of the most successfully tested theories of all time). In point of fact, prior to the publishing of Hubble’s paper, cosmologists used the theory of general relativity to predict that the universe would be either expanding or contracting.
Issue 2 – Does the redshift of light from other galaxies indicate distance?
Similarly, the most straightforward interpretation is, “yes,” redshifts associated with other galaxies indicate distance. Observationally, it must be mentioned that this specific form of redshift occurs because of the stretching of space, and we say it is cosmological in nature. (More information on cosmological redshifts is presented below.)
Issue 3 – Does the redshift of light from quasars also indicate distance?
From Dr. Faulkner’s review of evidence compiled over the last 50 years, this answer is a compelling “yes.” Here, too, the redshift is also cosmological in nature, which means it likewise occurs as a result of the stretching of space. (Once again, information about cosmological redshifts is presented below.)
Difference between Doppler motion and cosmological redshifts
Doppler motion describes the shift in the wavelength of light caused by an object's motion: the wavelength of light from objects traveling toward you is compressed (shifted toward the blue end of the spectrum), and the wavelength of light from objects traveling away from you is stretched (shifted toward the red end). Therefore, because everything in space is always moving (including our solar system), the observed shift of light from any light-emitting object in the observable universe is the combined sum of its Doppler shift (either a blueshift or a redshift) and its cosmological redshift (due to the inherent stretching of space away from us).
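The notion of a shift can be stated precisely as z = (observed wavelength − emitted wavelength) / emitted wavelength. The following sketch (our own illustration, with hypothetical wavelength values) shows how the sign of z distinguishes redshift from blueshift:

```python
# Redshift/blueshift from wavelengths: z = (observed - emitted) / emitted.
# Positive z means the light was stretched (redshift); negative z means
# it was compressed (blueshift). Wavelengths below are illustrative, in nm.
def shift(emitted_nm, observed_nm):
    return (observed_nm - emitted_nm) / emitted_nm

# A spectral line emitted at 656.3 nm but observed at 689.1 nm is redshifted:
z = shift(656.3, 689.1)
print(round(z, 3))  # a redshift of about 0.05

# The same line observed at a shorter wavelength would be blueshifted:
print(shift(656.3, 650.0) < 0)  # → True
```

For cosmological redshifts, z grows with distance, which is why the very large redshifts of quasars point to very large distances.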
Quasars were initially discovered in the early 1960s, but it took astronomers some time to figure out what they were seeing. In fact, the term quasar is a contraction of "quasi-stellar radio source." According to Dr. Faulkner, a quasar is a point in space that has more luminosity than an entire galaxy (trillions of stars). Though described as a "point in space," quasars are roughly the size of our solar system, small compared to the enormous span of a galaxy such as our own Milky Way Galaxy or the "nearby" Andromeda Galaxy. One of the best-known quasars is 3C 273; however, a quasar called 3C 454.3 is one of the most luminous known objects in the universe. It is thought today that quasars are powered by supermassive black holes.
Figure: Quasars emit across the entire electromagnetic spectrum.
Notably, quasars emit across the entire electromagnetic spectrum, from radio waves to X-rays. In fact, they were first discovered with radio telescopes. However, as already mentioned, the most distinguishing feature of quasars is their high luminosity (their peak emission lies in the near-ultraviolet and optical ranges). It is also important to note that quasars have a large redshift that is cosmological in nature, and thus quasars are very distant objects. More importantly, this combination of high redshift and vast distance equates to a large look-back time (and we may consider look-back time as our window into the observable universe).
Today, quasars are recognized as early galaxies and are described as galaxies with an active galactic nucleus (AGN). Interestingly, our understanding of galaxies is presently viewed as a continuum that ranges from the most active AGNs (such as quasars) to so-called normal galaxies (such as the Andromeda Galaxy or the Milky Way).
For the contest:
Option 1: Because much of astronomy involves the collection of light, in your own words please describe the electromagnetic spectrum, the duality of light, the concept of spectroscopy, and the notion of redshift. A useful web resource for learning more about light and spectroscopy is the Learning from Light Educational Home Page presented by William "Bill" P. Blair of Johns Hopkins University. Another good web resource, one that focuses entirely on the electromagnetic spectrum, is Tour of the Electromagnetic Spectrum developed by NASA. Finally, a brief outline comparing so-called normal galaxies to the most energetic active galactic nuclei (i.e., quasars) is also encouraged.
Option 2: Please research, compare, and contrast the young earth cosmological models of the universe according to a biblical creation account (i.e., the history and development of the young earth creation cosmogony). Also, please research why the moon is slowly spiraling away from the earth, and then comment accordingly on how this presents a conundrum for evolutionists.
– Historical Perspectives –
Historical perspective #1: A topic that deals with mathematics and education.
Mathematics finds its simplest and most elegant use in the mere act of counting; as a school of thought and field of study, however, mathematics may be defined as the fundamental language of science. In this light, this topic touches upon the concept of truth defined through the medium of mathematics and the inherent, absolute truth we receive through Scripture.
Here the intent is two-fold for students choosing this topic:
To develop deeper insight into mathematics
To form — from a Christian perspective — a more well-rounded foundation for the study of mathematics, and correspondingly, its intuitive applications in the sciences
For the contest:
Part 1: Students must review and in their own words summarize the 2005 article “Reflections Upon the Relationship Between Mathematical and Biblical Truth” by Dale L. McIntyre, published in the Journal of the Association of Christians in the Mathematical Sciences.
Remark: In the article’s introduction — see the opening statements — Professor McIntyre quotes the title of a 1997 book The Outrageous Idea of Christian Scholarship by historian George M. Marsden. Notably, Marsden makes a case for the Christian perspective in education, culture, and society by advocating for the believing Christian. Effectively, Marsden argues that Christian scholarship is really not such an outlandish notion, and goes further, in fact, claiming that such scholarship (and perspective) is indeed beneficial.
Part 2: In addition, students should also include a subsection in their essay that expresses their views and opinions concerning the impact of a Christian education on a biblical creation worldview.
In 2016 the Journal of the Association of Christians in the Mathematical Sciences changed from being an online journal to refereed conference proceedings.
Historical perspective #2: Students who choose this topic will compose a short biography on the life and work of the Swiss mathematician Leonhard Euler (1707-1783).
In composing your biography of Euler (pronounced OY-lur or OY-ler), please pay special attention to and include the following:
Euler’s theological views on God as Creator
Merely as an intriguing example of Euler’s work, discuss the equality known as Euler’s identity
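As a taste of why Euler's identity, e^(iπ) + 1 = 0, is so celebrated (it ties together e, i, π, 1, and 0 in a single equation), the relation can even be checked numerically. The sketch below, offered only as an illustration, uses Python's standard complex-math module; the result is zero up to floating-point rounding:

```python
# Euler's identity: e^(i*pi) + 1 = 0, verified numerically.
import cmath

value = cmath.exp(1j * cmath.pi) + 1

# |value| is not exactly zero in floating point, but it is vanishingly small.
print(abs(value) < 1e-12)  # → True
```

The tiny residual comes only from the finite precision of floating-point arithmetic; the identity itself is exact.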
Remark: Students should find value in reviewing the 2006 article “The God-Fearing Life of Leonhard Euler” by Dale L. McIntyre, published in the Journal of the Association of Christians in the Mathematical Sciences.
Historical perspective #3: Students choosing this topic will write about the influences of the Greek language and the Roman roads on the timing and spreading of God’s Word.
Students who choose this topic should discuss the era of widespread adoption of the Greek language across the eastern Mediterranean (zeroing in on lands encompassing modern-day Israel).
Take note of the history of how Greek came to be used in this region: its significance, to what extent the language was spoken/written, and over what period of time it flourished.
Focus your response on the form Koine Greek, mentioning the Attic and Ionic dialects it was based on.
Briefly talk about the specificity and precision of Greek: highlight the five parts (or aspects) of Greek verb usage — person, number, tense, voice, and mood.
In addition, include your views on whether or not the Roman occupation of this region discouraged or encouraged the use of the Greek language (i.e., whether or not the Romans allowed Greek as a common language, also known as a trade language).
Students should discuss how this topic relates to the “timing” of God’s Word, and what significance, if any, the Roman roads may have had on the spreading of God’s Word from the Mediterranean Basin.
And, finally, with respect to origins, please read “Have We Misunderstood Genesis 1:1?” by Dr. Joshua D. Wilson and comment on what “in the beginning” means to you. (We note that here Dr. Wilson draws the reader’s attention to the influence of the Septuagint — the Greek Old Testament.)
Remark: The Septuagint is the Greek translation of the Old Testament (the Hebrew text), and is recognized as the oldest complete version of the Old Testament. Completed by Jews fluent in Greek and Hebrew in the third century B.C., the Septuagint was considered an authoritative text for Jewish people well into (and beyond) the first century A.D., a period that includes Jesus Himself and the New Testament writers. Moreover, Dr. Wilson recently wrote a follow-up article on "beginnings," published in Answers Research Journal, called "Linguistic Traits of Hebrew Relator Nouns and Their Implications for Translating Genesis 1:1."
Submission Guidelines for High School and College
Papers will be grouped and judged according to their entry category: high school or college.
- Topics may be drawn from the questions selected by the editorial team in the areas of science, math, language, or education.
- The category of historical perspectives provides opportunities to focus on intriguing events, people, or schools of thought (and explain the impact on creation science).
- In fairness to the varying topics, papers may range in length, at the student’s discretion, from a minimum of 2,000 words (i.e., 8 double-spaced pages) to a maximum of 4,000 words (i.e., 16 double-spaced pages) in 12-point Times New Roman font.
- Papers should be submitted as a single PDF file, including a cover page with title, name, school (or home school), supervising instructor (if applicable), email address, and permanent postal address.
- Following the cover page, the paper should contain four major sections: Title Page, Abstract (between 150 and 200 words), Main Body, and References.
The title page should include the title of the paper, the author’s name as well as the supervising instructor’s name (if applicable), and the school (or home school). Also, each page, beginning with the title page, should have a shortened “running title” left-aligned in the header, with page numbers on the right.
The abstract provides a short description of the main body, or findings, of the paper. This section should be titled “Abstract.”
The main body should begin with an introduction. The introduction should supply adequate background information, and it should end with one or two well-constructed sentences describing the subject matter that the paper will cover (i.e., the direction the paper will take).
The reference page should start at the top of a new page, and this section should be titled “References.” (Please refer to the next bullet point describing sources, and the desired citation and reference style for papers.)
- Sources should be referenced and cited in APA style. If needed, please see the APA Formatting and Style Guide from Purdue Online Writing Lab for more information on APA citation and referencing.
Papers should draw from credible sources. Credible sources include, but are not limited to, books and textbooks, print or online periodicals or journals, and web-based columns, such as newsletters or magazines, that cite their material.
Please don’t plagiarize. Make every effort to properly cite and reference your sources.
- Spell out books of the Bible. If abbreviation is necessary, please avoid two-letter abbreviations. For example, Isaiah should be abbreviated Isa., not Is. The only exception is the book of Psalms, which should be abbreviated Ps.
- Please submit an electronic copy to the provided address.
- Individually authored papers only, please.
Submission Deadlines for High School and College
The deadline for submission is May 1, 2020.
Esteemed High School and College Papers Receive Informal Publication
Authors of the top two high school papers and the top two undergraduate papers will be offered the opportunity to work with our editorial team. This collaborative effort allows authors to grow familiar with the editorial steps related to publication, as their papers will be posted to our archive page (where papers are identified simply by title, the author’s first name, and the city and state).
Five broad-based areas of research are identified:
- Science (i.e., “observational/experimental science” and “historical science”)
- Mathematics and the application of mathematics in the sciences
- Language and linguistics
- Education and teaching techniques
- Historical perspectives
Areas of interest include but are not limited to the following:
Apologetics, archaeological measurements, archaeology, astronomy, biochemistry, chaos theory, climatology, cosmology, creation apologetics, earth science, entropy, experimental techniques, Fourier analysis, fractals, genealogy, geology, Global Flood, gravitational physics, greenhouse gases, harmonic analysis, heat equation, heliophysics, historical science, Ice Age, life science, lineage of Christ, linguistics, magnetic fields, medical physics, medical research, memory, meteorology, mutations, oceanography, ontological argument, paleontology, patterns, physiology, planetary motion, planetary science, quantum mottle, quantum physics, scientific method, Scripture exegesis, seismology, self-similarity, signal processing, speciation, thermodynamics, volcanology, wave equation, wisdom, Word of God.