Depression can stop people from performing normal daily activities. Even simple conversation can sometimes be a struggle, and going to work is hard when a person suffers from depression or anxiety. Such conditions are unpleasant and drain the joy from people’s lives. Anxiety disorder is often accompanied by panic disorder, which disables the person even further. If you have problems with panic, start taking home remedies for panic attacks and you will soon see a change.
Home remedies are cheaper than conventional medication and in many cases work well. They affect the body slowly, but safely. Conventional medication often causes side effects, which can be avoided by taking natural remedies instead. Herbal teas are one example: chamomile, lemon balm and valerian are well-known natural remedies that soothe the nerves and relax the body. Taken with acacia honey, they can bring results quickly. Magnesium supplements may also help, since magnesium is known to ease stress and anxiety. Changing the diet to organic fruits and vegetables along with nuts and cereals is useful as well, because an ailing body needs nutrients in order to heal. Reducing sugar and fat is a must in such a condition.
How To Stop Panic Attacks With A Few Simple Methods
Each one of us has some bad memories, and most people manage to cope with them somehow. Unfortunately, some individuals do not have the strength to face their fears or past traumas, so they panic very easily. You will need courage to face your fears and stop panic attacks. Confronting the problem may make you fear it even more at first, but take it step by step and after a while you will be a new person.
To face the fears, start keeping a journal. Buy a small, attractive notebook, take a pen and write your thoughts down. Don’t be ashamed to write whatever comes to your mind, because you will be the only one who knows about the journal and the only person who will ever read it, so don’t be afraid to record every single fear you are facing. Think about your emotions. What do you feel? If something is making you particularly sad or angry, write it down. Keep the diary every day and you will soon see that this is a very useful form of therapy that helps you drive fear out of your life. You should also check out this great reference site for ways to stop panic attacks.
Many people are loyal to the computer brand HP, and many are disappointed when they encounter HP ProLiant disk problems, because they assumed the brand would be hassle-free and easy to use. Let us face it: disk problems cannot be avoided, whether they involve an HP ProLiant or something else, and they are hard to resolve as well. Technicians often struggle with this kind of hard drive issue, and the drives frequently cannot be reused. Things get even harder when you find out that a damaged drive cannot simply be reformatted (like a memory card) and put back into service.
There are always instances when your computer will encounter G-raid logical disk issues. This kind of problem can cost you all your files, especially those needed for your work or personal life. Although files can often be retrieved with the help of a data recovery company, there is still a possibility that they will never be recovered. Thus, when G-raid logical disk issues appear, the drive should be checked as soon as possible to prevent losing everything. Warning signs include a computer that does not turn on, or clicking, scratching, grinding or other strange noises. These signify a damaged hard drive, which may have been affected by an electrical problem or other external factors.
Furthermore, there are several obvious signs of a damaged hard drive, including being unable to save or open files and losing access to them entirely. Fixing the problem yourself can reduce expenses, but if you are not an expert, it is better to find an experienced technician who can handle your G-raid logical disk issues. This person can recover your files when you can no longer handle it yourself.
As with memory cards, it is always best to be careful and take good care of your hard drive rather than look for a solution after it breaks or becomes corrupted. If you own an HP device, you need to be extra careful, because this kind of system issue is hard to avoid, and once the drive breaks or becomes corrupted it may be damaged for good. One of HP’s most trusted partners for fixing RAID disk problems is Hard Drive Recovery Group of Irvine, CA. See their site here.
What Makes RAID 5 Recovery Stand Out From Other Brands And Versions?
The fifth version of the RAID recovery system competes not only with other brands of recovery system but also with older versions of its own brand. Many people now compare those older versions with RAID 5. Some say that RAID 5 recovery is too much for them to handle and use, so they stay with the fourth version. The company upgrades the system for many reasons, one being that computer systems themselves keep upgrading: old recovery versions were built for old computers, no longer fit the new systems, and do not support them.
If anyone tries to make it work with an old system, the whole system might become corrupted or, much worse, permanently damaged, and reformatting (after removing the old RAID recovery system) could be the only way to save the computer. For what it’s worth, RAID 5 recovery is meant to be compatible with current computer systems and those that follow, because this new version of RAID recovery was built to adapt to them.
Drug rehab refers to the processes of medical treatment and psychotherapy for people who are addicted to psychoactive substances. Drugs like cocaine, marijuana, prescription drugs and even amphetamines are among the most commonly abused. Usually these addictions compromise a person’s life, which means the individual’s life ceases to be normal. The aims of drug rehab are many; most are based on the personal advantages one gains from getting such help, but there are legal benefits to seeking help as well.
A person’s finances are usually ruined by incessant drug use. Alcohol, amphetamines and cocaine are expensive, and a user spends a great deal of money satisfying the cravings that arise. Social and family problems are also common results of drug addiction, as are withdrawal problems, particularly with alcohol and amphetamines. Brushes with the law are frequent among people who abuse drugs; drunk driving is a common example for those hooked on alcohol. Going for drug rehab can therefore enable one to take back control of his life, if the exercise proves successful.
What To Look For In A Drug Rehab Program
A drug rehab program must be workable. It must contain simple plans that a drug addict can easily follow. The first thing you need to do is identify a drug rehab center that offers such a program. Before you admit your patient, make sure you have some knowledge of the programs a rehab center runs. This can be done through consultation with the experts; alternatively, you can use the internet to gather more information about these programs.
A good drug rehab program must extend beyond the rehabilitation center itself. The healing process is not confined to the center’s four walls. Once the patient completes the in-house program, he or she must be offered a follow-up program until there is total recovery.
Every program for a drug rehab patient must be geared toward recovery; that should be its main product. A drug addict must go through the whole program and emerge a winner, free of addiction. This means these programs must be tried and tested, proven to work in the life of an addict.
You have heard stories about bloggers earning a great deal from their blog sites and you want to follow in their footsteps. However, you have to learn some tricks on how to make money blogging. First, choose a niche that will interest people from all walks of life all over the world. In order for your blog site to earn, you need to build a huge readership, and these readers are the people who need the information you share in your blog. Second, choose topics that you find fun to write about. People read not just to learn something but also to be entertained.
Try to write blogs that your readers will love to read. The last tip on how to make money blogging is to optimize your pages. Use relevant keywords and make your pages search-engine friendly. Remember, the money you earn will depend on the number of people visiting your site and reading your blogs, which is far more likely if your blog site occupies the first page of the search results. The more readers you have, the more income you will likely get. Take these tips and you will not need to worry about how to make money blogging.
Starting Blogs To Improve Business Value
People who run online businesses often write blogs in order to earn more money and advertise their brand, their company or themselves. All types of businesses use blogs as a way to improve their financial situation. Interesting blogs that talk about life in general, art and similar subjects can also drive traffic. Such blogs are not strictly business blogs, since they don’t discuss business, but because they have many readers, advertisers may find them attractive and ask to promote their brands by putting ads on a popular blog. So, how do you start a blog and make money? Read here!
First of all, a person who wants to start a blog must decide what kind of blog to have. Some blogs consist only of text, but the most interesting ones combine text with photos. High-quality photographs attract people, and sometimes readers visit a blog just to see the new photos. It can be worth hiring a photographer to take pictures that make the blog more appealing. The next step is deciding on a name, which should be simple and short but original. So, starting a blog and making money is not that hard!
Choosing Topics: One Way On How To Write A Successful Blog
If you want to learn how to write a successful blog, you must be willing to spend more time thinking about and deciding what to write. Most internet users surf the web in search of relevant information and entertaining blogs. Writing content that meets their needs can lure them to your site, keep them reading your posts, and generate more traffic. In deciding what to write about, select topics that answer questions beginning with “what”, “how” and “why”. Most internet users want to know the basic processes or steps in doing something.
They also want to know about causes and effects, as well as explanations. Learn how to write a successful blog by providing answers to these questions. If you are explaining a process, including images that demonstrate it will add value to your site. Videos from YouTube can also supplement your content or make your point clear to your readers. Using images with the appropriate tags will also contribute to optimizing your pages. For bloggers who are serious and committed to their goals, learning how to write a successful blog will always be an enjoyable and productive experience.
The charge leveled against metrication as a “Stalinist” imposition by the scientific establishment on the public is easily dismissed. It is worth stepping back for a moment and understanding how modern science has chosen its units of measurement. There are seven basic units in the so-called International System, known in scientific circles by its French acronym SI. These are the metre, kilogram, second, ampere, kelvin, candela and mole. They measure, respectively, length, mass, time, electric current, temperature, luminous intensity and amount of substance in terms of its atomic weight. Then there are the secondary units derived from these and also forming part of the SI, such as volt to measure voltage, newton (force), joule (energy), watt (power) and so on. The units may be combined to form compound units, such as metre per second to measure speed or newton-metre to measure torque.
The SI units have the huge advantage of mutual consistency and compatibility within all the mathematical formulas in all the sciences, and also have unambiguously assigned names. To take a simple example, consider the formula E = (1/2)[mv.sup.2], expressing the kinetic energy E of a mass m moving at a speed v. To get the kinetic energy, you square the speed, multiply it by the mass, and halve the resulting product. If m is measured in kilograms and v in metres per second, then E is automatically the correct number of joules. Thus a 1000 kg car moving at 20 metres per second (about 72 kmh) has a kinetic energy of about 200,000 joules.
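The arithmetic in the kinetic-energy example above can be checked with a few lines of Python (an illustrative sketch, not part of the original essay):

```python
def kinetic_energy(mass_kg: float, speed_m_per_s: float) -> float:
    """Kinetic energy E = (1/2) m v^2; returns joules when SI units go in."""
    return 0.5 * mass_kg * speed_m_per_s ** 2

# A 1000 kg car moving at 20 metres per second (about 72 km/h):
energy_j = kinetic_energy(1000, 20)
print(energy_j)  # 200000.0 joules
```

Because the inputs are in SI units, no conversion factor appears anywhere: the answer is automatically in joules.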
Now, for contrast, consider the same calculation in the older “foot-pound-second” (or “British”) system of measurement. The mass would be measured in pounds and the speed in feet per second. The energy would then be given in the awkward units called foot-poundals, and, to make matters worse, was often converted to another unit called foot-pounds-weight. A “pound-weight” was the force exerted by the earth’s gravity on a mass of one pound, and a pound-weight was roughly 32 poundals. Why should the earth’s gravity have anything to do with the formula E = (1/2)[mv.sup.2], which holds true on the moon, for example, or in outer space? In fact the formula has nothing to do with it, but confusingly this older system of measurement seemed to imply some connection between kinetic energy and gravity. Moreover, in this older system there was no greater freedom to choose one’s units to conform to some particular scale of immediate convenience than exists in the present SI. Thus, in the kinetic energy example, an incorrect answer would be obtained if the speed were measured not in feet per second but in miles per hour, furlongs per fortnight or whatever, or if the mass was that of a hundred-ton aircraft and entered into the formula as “100”. The older system is in fact more accurately referred to in the plural — there were, for historical reasons, many old systems with, understandably, little in the way of mutual consistency.
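The conversion burden of the old system can be made concrete with a small sketch (the masses and speeds here are made up for illustration; the conversion constants are the standard approximations):

```python
# The same E = (1/2) m v^2 in the old foot-pound-second system.
# Conversion constants (standard approximations):
POUNDALS_PER_POUND_WEIGHT = 32.174   # standard gravity in ft/s^2
FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

def kinetic_energy_foot_poundals(mass_lb: float, speed_ft_per_s: float) -> float:
    # Gives foot-poundals ONLY if the speed is already in feet per second;
    # feeding in miles per hour directly produces a silently wrong answer.
    return 0.5 * mass_lb * speed_ft_per_s ** 2

speed_mph = 45
speed_fps = speed_mph * FEET_PER_MILE / SECONDS_PER_HOUR   # 66.0 ft/s
e_poundals = kinetic_energy_foot_poundals(2000, speed_fps)  # foot-poundals
e_ft_lb_wt = e_poundals / POUNDALS_PER_POUND_WEIGHT         # yet another unit
print(round(e_poundals), round(e_ft_lb_wt))
```

Two conversion steps are needed before and after the physics, and each one (mph to ft/s, foot-poundals to foot-pounds-weight) is an opportunity for error that simply does not arise in the SI.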
I CAN VOUCH PERSONALLY for the great simplification afforded by the SI. If this simplification is what Padden refers to as “window-dressing”, he is seriously deluded. As a high-school physics student in the fifties I remember having to use conversion factors to go from calories per gram to British Thermal Units per pound, and vice versa. In studying electricity and magnetism, there was a constant risk of muddling electrostatic, electromagnetic and what were then known as the “practical electrical units”, with various conversion factors from one set to another that depended on the physical quantity in question — electric current, voltage, magnetic strength or whatever. So a student of physics was obliged to divert a substantial part of his effort into developing what amounted to a meretricious linguistic agility, having little to do with achieving a good grasp of physics.
When very large or very small quantities are measured, a wide range of prefixes are available for attaching to the basic SI units. Thus from metre one can go down to millimetre, micrometre (one-millionth of a metre), nanometre (one-billionth), and up to a kilometre. The kilojoule (1000 joules) is of course now a familiar term in dietary contexts, as megajoule (one million joules) is on our gas-heating bills. (Electricity bills often state consumption in kilowatt-hours, but a kilowatt-hour is exactly 3.6 megajoules.) Padden’s comment that metre is “totally inadequate to cope with all situations in which length is measured” is incorrect; one could in fact say the same of foot or inch — how many inches long is a human blood cell?
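The prefix mechanism is just multiplication by a power of ten, which a short sketch makes plain (the function and table names here are illustrative, not standard library code):

```python
# SI prefixes scale one base unit across many orders of magnitude.
SI_PREFIXES = {"k": 1e3, "": 1.0, "m": 1e-3, "u": 1e-6, "n": 1e-9}

def to_metres(value: float, prefix: str) -> float:
    """Convert a prefixed length (e.g. 8 micrometres) to metres."""
    return value * SI_PREFIXES[prefix]

# A human red blood cell is roughly 8 micrometres across:
print(to_metres(8, "u"))       # 8e-06 metres
# A kilowatt-hour is exactly 3.6 megajoules:
print(1000 * 3600 / 1e6)       # 3.6
```

The same base unit covers the blood cell and the kilometre; no new unit has to be invented at each scale, which is exactly what foot or inch cannot offer.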
In both the teaching and practice of science, the SI is much better than any preceding system of units. There is no evidence that metrication, the public face of the SI, has ever been more than a very minor nuisance to the general public. Since the teaching and practice of science entail as a natural consequence the delivery of its fruits to the public — in one word, technology — it would make no sense to use the SI in the former case but more traditional units in the latter. Why should (for example) research into the efficient generation and distribution of electricity be conducted using the SI, but our electricity bills state consumption in British Thermal Units?
One could point to many cases where the public accepted without uproar the changes from traditional to metric units that occurred in the past few decades. When weather forecasts in Australia started quoting temperatures in degrees Celsius rather than Fahrenheit, there was admittedly a period of adjustment, but we very quickly learned to recognise that, for example, 18°C was cool and 28°C very warm. Similarly for vehicle speeds: all good drivers recognise 40 kmh as a low speed mandatory in school zones and the like.
So far from being a “Stalinist” imposition on the public, metrication always willingly takes a back seat during special events with their own traditional ways of measurement; thus wind and yachting speeds are reported in knots during the Sydney-Hobart race and nobody objects. In other contexts, distances are still often given in miles rather than kilometres, and there is no danger that this older measure of large distances will be forgotten. These and similar examples amount to a demonstration of respect for the public which is uncharacteristic of totalitarian regimes. There is even evidence that if a metric unit should coincide, fortuitously, with a traditional unit, then a similar name is given to the metric unit: 1000 kilograms is called a tonne because it differs by less than 2 per cent from the traditional ton.
HAVING AIMED at metrication — without, I assert, leaving a single dent — Padden next broadens his target to include the whole of modern science. He seeks to persuade us that corruption in the medieval church has its present-day analogue in pop science, defence funding and the publish-or-perish syndrome. Of this diabolical trio, the first is best represented by such outstanding popularisers as the late Jacob Bronowski and Isaac Asimov, and more recently by such expert practitioners and interpreters as Paul Davies, John Gribbin, Stephen Hawking and Roger Penrose, to name only a very few, and the pejorative term “pop science” does not do justice to their contributions. The second item, defence funding, may well have inflated some areas of research at the expense of others, but if it has played any part in the warless abolition of the Soviet Union I have no great quarrel with it. Publish-or-perish can be found anywhere in the universities or other research bodies and is by no means restricted to science.
Comparing modern science with the medieval church will, however, fail for a deeper reason: science works. The particular areas of intensive research may change from time to time, depending on the public interest as perceived by money-granting politicians or on the changing tastes of new generations of scientists or on new and exciting advances in a particular field. But the rules of scientific research will not change: new results and new theories must be tested against experiment, repeatedly so by different teams, and are accepted as “true” only provisionally and after many attempts to falsify or disprove them. A new theory is not regarded as meaningful unless it predicts further testable results. This objectivity and predictive power at the heart of science are the reason that Padden’s calls for a “more heterogeneous, more democratic, richer science” are no more intelligible than the proclamations one reads from some feminist writers that the “scientific paradigm” will change for the better when more women practise science.
What we know about nature is, therefore, hard-won knowledge. That this is “true” knowledge in the sense that, for example, astrology is not, was proven when (to take just two examples from a vast number of possible examples) transistors replaced vacuum-tubes (so eventually making possible very fast and small computers), or when lasers became routinely used in eyesight-saving surgery. Both these technologies depend on hard-won understanding of quantum mechanics and solid-state physics.
Some years ago a luminous event occurred in the public presentation of science. It followed the explosion of the Challenger space shuttle in January 1986 barely more than a minute after lift-off, killing all seven of the crew. The Reagan administration appointed an investigatory commission which included the Nobel laureate in physics Richard Feynman. The commission’s attention soon focused on the rubber O-rings which were used to seal the joints holding together the several sections of the shuttle’s solid rocket boosters and whose resilience was critically important.
At ordinary room temperatures rubber is indeed resilient — its best-known attribute. However, if cooled sufficiently, rubber hardens and the O-rings would cease to function as seals, allowing in the case of the shuttle the lethally dangerous escape of hot gases. The weather had been very cold at the time of lift-off, with ice forming on the launching pad.
The Radiation Imaging Group at the University of Surrey is one of the leading European academic centers for radiation detector development. Its research is focused on the development of new sensors and systems for radiation imaging. The group’s goals are primarily to image the growth and development of plant seeds in order to understand and breed newer and better plants for use in developing countries. Its research has far-reaching implications for industries as varied as forestry, ecology, entomology, agronomy, environmental science, pharmaceuticals, and process control engineering.
“The ability to non-destructively image a cross-section through a sample is a very powerful technique,” said Dr. Paul Jenneson, a postdoctoral research fellow in the Radiation Imaging Group. “The potential users for such a system are too numerous to mention individually.”
The Radiation Imaging Group was also the first organization to visualize the germination of a wheat seed in its native ferrous soil. “We don’t want to interfere with the environment, and we want to study the plants over a period of time to see how they develop. The plants, therefore, need to be imaged in their iron-rich soil environment, which renders Magnetic Resonance Imaging ineffective,” stated Dr. Jenneson. “In addition, any invasive tactic, even with the most delicate preparation or cutting methods, can cause the specimen structure to change dramatically, thereby degrading research efforts. Thus, we elected to use x-ray micro-tomography as our method of study.”
For the micro-tomography research, the group designed and built a low-dose hardware system. “We believe our system to be the most carefully optimized micro-tomography system for low radiation dose imaging in the world,” asserted Dr. Jenneson. The system is built of commercially available units but the combination and arrangement are unique. The system includes a Hamamatsu x-ray image intensifier as a detector, and an Oxford Instruments x-ray tube as an x-ray source. The motion needed to do tomography is provided by Time & Precision stepper motors and controls.
“For our purposes, the projection, or radiograph, images are reconstructed into cross-section images using a filtered back-projection routine,” explained Dr. Jenneson. “We then needed something that would allow us to process and visualize the final three-dimensional data set, which is basically a stack of cross-section images, and display them in a meaningful way on a 2-D VDU display. We also wanted it to be accessible on Windows NT, Unix and Linux systems.”
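The workflow Dr. Jenneson describes — a stack of reconstructed cross-sections treated as one 3-D data set, from which 2-D views are pulled for display — can be sketched in a few lines of NumPy (a hypothetical illustration, not the group’s actual IDL code; the array sizes are made up):

```python
import numpy as np

# Reconstructed cross-sections are stacked along the z axis into a 3-D
# volume; orthogonal 2-D slices can then be extracted for a 2-D display.
n = 64
cross_sections = [np.random.rand(n, n) for _ in range(n)]  # one per z level
volume = np.stack(cross_sections, axis=0)                  # shape (z, y, x)

axial    = volume[n // 2, :, :]   # an original cross-section plane
coronal  = volume[:, n // 2, :]   # a re-sliced orthogonal view
sagittal = volume[:, :, n // 2]   # the other orthogonal view
print(volume.shape, axial.shape, coronal.shape)
```

The point is that once the filtered back-projection has produced the cross-sections, “slicing” the volume in any orientation is just array indexing.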
The group considered several software packages before selecting IDL from Research Systems, Inc. “IDL allows us to do many terrific things with our data,” said Dr. Jenneson. “Prior to using IDL, the only way to view the tomographic data was with 8-bit gray scale two-dimensional cross-sectional images. IDL has been enormously helpful to us in that regard. Our data are processed using routines such as LABEL_2D and HISTOGRAM. The CONGRID interpolation provides a very fast cubic-interpolation routine, which we use during tomographic acquisition. We also use the three-dimensional visualization routines in `Slicer3,’ and frequently create iso-surfaces in IDL to visualize the three-dimensional data sets.”
“We also love how quickly and easily we can create new routines utilizing the existing library of routines with the simple syntax provided in the IDLDE. The help is the best source of help I have come across for any program,” said Dr. Jenneson.
The data sets are currently 256 x 256 x 256 cubes of double-precision data. “We have the hardware capability of obtaining 2048 x 2048 x 2048 cubes of data,” explained Dr. Jenneson, “but the data sets are limited in size by CPU and memory constraints.” Data can be output as image files, such as GIF, JPEG or TIFF, or as movies in MPEG format.
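The memory constraint Dr. Jenneson mentions is easy to quantify: at 8 bytes per double-precision voxel, a 256-cube is modest but a 2048-cube is enormous. A quick check:

```python
# Memory footprint of a cubic volume of double-precision (8-byte) voxels.
def cube_bytes(side: int, bytes_per_voxel: int = 8) -> int:
    return side ** 3 * bytes_per_voxel

print(cube_bytes(256) / 2**20)    # 128.0 MiB  -- workable
print(cube_bytes(2048) / 2**30)   # 64.0 GiB   -- far beyond the era's RAM
```

A factor of 8 in linear resolution costs a factor of 512 in memory, which is why the hardware can outrun what the software can hold.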
Because most users of the application are competent Windows users, but are not programmers, the group needed to build a GUI front-end to the application. “We used IDL’s ActiveX control, in combination with Microsoft Visual Basic, to develop a friendly graphical user interface. The use of Visual Basic allows us to embed other commercial ActiveX components into the same program, which keeps us flexible. The open architecture also allows us to seamlessly integrate the tomographic hardware control with the powerful image processing routines provided by IDL.”
Although the Radiation Imaging Group’s research uses x-ray micro-tomography to visualize the growth of plant roots, the system the members have developed can be used for a number of different non-destructive cross-sectional imaging challenges. “As well as being able to image developing plants, the micro-tomography system can be used in a number of other studies,” continued Dr. Jenneson. “For example, the study of small invertebrates has become a bit of a showcase for demonstrating the spatial resolution obtainable with x-ray micro-tomography. The structure of such creatures is on an ideal scale to demonstrate the benefits of the technology.
“Micro-tomography can also be used for process control applications in the food and pharmaceutical industries, where the nondestructive imaging of a product on the 100 micrometer scale can yield some very valuable information. For example, one can study a product’s internal structure to better understand how a sample is packing or settling in a container. In the pharmaceutical industry, the production of capsular pills, which are comprised of a dissolvable outer case containing a powder pharmaceutical, can be monitored to ensure the capsule actually contains the proper amount of powder. The capsule system can also be studied over time to assess the `packing’ and any degradation in the powder,” he said.
The Radiation Imaging Group hopes that the x-ray micro-tomography system and accompanying research will be used by soil scientists and agronomists to develop more advanced crops and create a reference database of current plant root systems. “We have high hopes for this type of system; the ability to non-destructively section, slice, visualize and analyze 3-D data on such a minute scale opens up new opportunities for many industries and areas of study. And IDL has played a very significant role in that progress,” concluded Dr. Jenneson.
The development of chromatographic methods requires definition of the appropriate conditions by which a particular separation can be executed. The resulting method will include details regarding the column type, the solvents, the solvent gradient, buffers and so on. Generally, at least a hard copy of the chromatogram will be filed with the method with each peak in the chromatogram annotated with a textual identifier of the related chemical structure or, alternatively, with a hand-drawn or copy-and-pasted structure. This approach to the management of method details is incomplete and hardly sufficient in a global corporate environment where analytical details need to be exchanged between different laboratories. Similarly, even though millions of dollars are invested annually in the installation and maintenance of molecular structure databases, there have been few attempts to provide links between the thousands of chromatograms generated on an annual basis with the chemical structures identified within associated analytical laboratories. As a result, even though an abundance of information exists in regards to appropriate separations and methods related to particular classes of chemical structures, little of this is easily accessible and therefore of little use for future method development.
While attempts have been made to retrieve data based on text descriptions of chemical structures, this is a dangerous practice. Standardized naming conventions exist, but may prove difficult to apply accurately. This is further complicated by the coexistence of IUPAC and CAS naming conventions, as well as the IUPAC nomenclature’s acceptance of semi-common names such as steroid derivatives. Because of this, it is quite possible that a search for a correctly entered IUPAC name will fail to retrieve a required entry. Searches for functionality are even more problematic. Retrieval can be attempted using a text-string method, but in practice the huge number of potential strings associated with one fragment of a structure makes this method practically unusable. Except for the simplest searches, only searching on structures enables the user to find entries based on functionality.
Chromatographers, like spectroscopists, utilize their technology both to separate and to identify chemical structures. It is common in today’s analytical environments to find teams composed of people with both skillsets to generate optimal separation and analysis solutions. Spectroscopists assign their spectra in relation to chemical structures using parent ion mass analysis or fragmentation analysis in mass spectrometry, nucleus to peak assignments in NMR and vibrational bands to IR peaks. Commonly, spectroscopists have utilized the standard filing system of drawers full of spectra with an association of the file number with some textual identifier in order to locate the detailed knowledge extracted from the spectra at a later date. The general level of spectral management has been limited to handwritten notes in notebooks or sometimes text-searchable databases pointing to associated spectra. It is only during recent years that tools have become available to allow spectra to be databased in electronic format with associated chemical structures. In this manner the spectroscopist has inherited the opportunity to search the database for related structures or substructures, or spectral features when performing fresh analyses. This approach allows the generation of a legacy database of multiple spectroscopy data thereby building a foundation for future analyses. The value residing in such tools is the time-savings that result for the analysis of related chemicals and the exchange of information between different analytical laboratories within the same company. In theory, such an approach should not be isolated to spectroscopists.
For chromatography, tools now exist that allow a similar integration of chromatographic peaks and chemical structures; the software application described here is ACD/ChromManager. Unlike in spectroscopy, a single chromatographic peak is associated with a single chemical structure, or with several if species coelute.
The development of a toolkit that allows the association of chemical structures with a chromatogram, and ultimately the databasing of the resulting information, requires a number of specific features. In particular, processing an experimental chromatogram requires the standard tools for peak picking, noise removal, baseline correction, smoothing and peak integration, as well as advanced tools such as deconvolution. Since there are many chromatography hardware vendors, the ability to read raw vendor formats as well as standard ASCII or AIA files is necessary to allow laboratories with non-homogeneous environments to database their information in a consistent manner.
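Two of the standard processing steps named above, peak picking and peak integration, can be sketched on a synthetic chromatogram. This is a hedged illustration, not the algorithm used by any commercial CDS: real software applies far more robust noise handling, baseline correction, and bound detection than the simple local-maximum and trapezoid rules below.

```python
import math

# Synthetic chromatogram: two Gaussian peaks on a flat zero baseline.
def synthetic_signal(n=200):
    return [math.exp(-((i - 60) ** 2) / 50.0)
            + 0.5 * math.exp(-((i - 140) ** 2) / 80.0)
            for i in range(n)]

def pick_peaks(y, threshold=0.2):
    """Naive peak picking: local maxima exceeding a fixed threshold."""
    return [i for i in range(1, len(y) - 1)
            if y[i] > y[i - 1] and y[i] > y[i + 1] and y[i] > threshold]

def integrate(y, start, end):
    """Trapezoidal peak integration between two bounds (unit x-spacing)."""
    return sum((y[i] + y[i + 1]) / 2.0 for i in range(start, end))

y = synthetic_signal()
print(pick_peaks(y))                   # apices of the two peaks: [60, 140]
print(round(integrate(y, 40, 80), 2))  # area of the first peak
```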
Afterwards, the chromatogram should be available for printing, as well as for transfer to other tools for reporting (standard word processors and graphics programs). Object linking and embedding has become a standard technology for passing objects between programs, and it has been implemented to full effect in ChromManager, which allows direct integration with Word, PowerPoint and other applications. In order to attach chemical structures to the chromatography report, the CDS should be integrated with a chemical structure drawing system. For the system described here, ACD/ChemSketch, an application for generating molecular structures, is directly integrated. This application also imports standard file formats such as Molfile from other applications, again supporting non-homogeneous software environments. Following the processing of the chromatogram, a peak is selected and one or more structures are directly associated with it. Continuing this process across the whole chromatogram, structures are attached one by one to the appropriate peaks, integrating chemical connectivity into the data file. An example screen of the resulting file is shown in Fig. 2. Moving the cursor across each chromatographic peak displays its associated structure(s).
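The peak-structure association step can be pictured as a simple mapping from retention time to a list of structures, so that coeluting species share one peak. The compound names and retention times below are hypothetical, and this data structure is only a conceptual stand-in for what the actual software stores internally.

```python
# Conceptual sketch of peak-structure association: each picked peak,
# keyed here by retention time in minutes, is linked to one or more
# structures. Coeluting species share a single peak. All names and
# retention times are made up for the example.

peak_assignments = {}

def assign(rt_min, structure):
    """Attach a structure to the peak at the given retention time."""
    peak_assignments.setdefault(rt_min, []).append(structure)

assign(3.2, "caffeine")
assign(5.8, "theobromine")
assign(5.8, "theophylline")   # coelution: two species on one peak

# Moving across a peak would display its associated structure(s):
print(peak_assignments[5.8])  # ['theobromine', 'theophylline']
```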
The resulting chromatogram with associated chemical structures carries valuable information for future applications. The resulting files can be stored on a centralized server, becoming a powerful means of disseminating chromatogram-structure connectivity information. For example, a copy of ACD/ChromProcessor (ChromManager without the databasing capability) can be distributed to each chemist’s or chromatographer’s desktop, with access to the centralized server where the textual methods and the associated chromatogram-structure (ESP) files reside. This general approach can be expanded into a World Wide Web intranet approach, whereby the methods are posted as individual HTML pages with hyperlinked ESP files. When the methods are searched textually, the associated ESP file can be downloaded for viewing in the ChromProcessor helper application. A series of such information-rich chromatograms forms a valuable basis for method development and rapid identification of chromatographic conditions. These approaches, though valuable, share the constraint of most CDS systems: searches are primarily text-based.
However, the ACD/ChromManager application allows each chromatogram to be databased with its associated chemical structures, offering significantly enhanced capabilities over the common file systems used in many laboratories today. Prior to databasing the chromatogram, users can edit and update the sample data, instrumental data, detector data, elution data and column parameters.
The capabilities of database technology improve searching over the standard filing-cabinet system or a text-based database system. It is possible to search the resulting databases by structure, substructure, formula, molecular weight, chromatographic parameters or user data. User data includes up to 16,000 user-definable database fields with particular field labels, such as sample submitter, project name, and type of analysis, all of which become searchable fields. Multiple databases can be searched at one time, allowing different databases to be constructed according to column, project name, individual user, and other parameters. These multiple databases can also be distributed across different departments, divisions or even an entire corporation, simply by pointing to databases located on mapped network drives.
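The multi-field, multi-database searching described above can be sketched as follows. The field names, record values, and query interface are all invented for illustration; they do not reflect the actual schema or API of the commercial product.

```python
# Illustrative sketch of multi-database searching: each record mixes
# fixed chromatographic fields with user-definable fields, and one
# query can span several databases. Field names and values are made
# up for the example.

COLUMN_DB = [
    {"mol_weight": 194.2, "formula": "C8H10N4O2",
     "user": {"submitter": "A. Smith", "project": "QC-12"}},
]
PROJECT_DB = [
    {"mol_weight": 180.2, "formula": "C9H8O4",
     "user": {"submitter": "B. Jones", "project": "QC-12"}},
]

def search(databases, **criteria):
    """Match records on fixed or user-defined fields across databases."""
    hits = []
    for db in databases:
        for rec in db:
            merged = {**rec, **rec["user"]}  # flatten user fields for matching
            if all(merged.get(k) == v for k, v in criteria.items()):
                hits.append(rec)
    return hits

# One query spanning both databases, filtered on a user-defined field:
print(len(search([COLUMN_DB, PROJECT_DB], project="QC-12")))  # 2
```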
Highly trained scientists leave the country and seek employment abroad. Many of the younger generation seek work in fields where they can make a living but do not follow their training in science or technology. For a traditionally static society, the mobility of this young and promising generation is high. The older generation, well established in the former system, has great difficulty adapting to the new realities of life. In fact, a whole generation, the lost generation of this great period of social upheaval, is now in a very difficult state. It may gradually be displaced by the younger and more active modern generation, who are the real hope for our future.
In the organization of science, the traditional division between science and teaching has become a major issue. The government has stated that the cooperation of science and teaching should be pursued, but unfortunately, due to the conservatism of the whole system, it is very difficult to carry out these policies. The loss of the old ideology has led to a veritable vacuum of ideas, an emptiness and lack of meaning in life, having a deep effect on the young and expressed in the morals of society.
It is in these conditions that we should examine the state of science and pseudoscience in Russia. In the former system there was not much room in such a highly controlled society for pseudoscience. But in the last years of the ancien régime pseudoscience emerged, mainly in the guise of astrology, parapsychology, quack medicine, and similar manifestations. The authorities had not only lost control; on many occasions the practitioners of pseudoscience found support in the decaying system. Some were supported by the military, in bogus and secret projects. These events were clearly symptoms of a deep crisis, and any conscientious observer saw them as a precursor of things to come.
In the present conditions all controls have gone, no censorship exists, and even the limits of decency are transgressed in the press and on television. The freedom to publish has led to a veritable flood of pseudoscience. Books on all manner of alternative theories, ideas, and teachings are on the market. With the revival of established traditional religions and much greater freedom, bizarre sects spread, especially among the young.
Pseudoscience is even observable at high levels of the academic establishment. A well-known mathematician is publicizing a new chronology of world history in which there is no place for the Middle Ages and a thousand years of history are thrown out. These ideas are based on computer studies of manuscripts and astronomical data. In spite of a strong statement by the Academy of Sciences of Russia and professional criticism by historians, these works are published and discussed in the mass media. Work on cold fusion and other marginal effects is supported and publicized, for the level of expertise, and often the great persuasive power, of these pseudoscientists leads to support for their ideas. Where, then, are the limits of public debate and of professional honesty? Or is this all a transient phenomenon? Out of chaos will a new order finally come? These are not easy issues to resolve. Time and again the public is persuaded, if not fooled, on important matters of professional interest, often amplified by the media.
At the same time numerous pseudo-academies have been set up, from shamanism and black magic to seemingly more respectable headings like “information science” and others. They sound reasonable, but the professional standards practiced are very low and often are really attempts to institutionalize pseudoscience. Unfortunately, these groups manage to get support and capture the attention not only of the media, but also of some political bodies. At the same time, the Academy of Sciences, which is certainly the main body of science and should be the custodian of intellectual standards of a great cultural tradition, has had a very difficult time establishing and propagating its scientific and intellectual authority.
These conditions are only made more complicated and difficult by a lack of coherent science policy. Perhaps in these cases the last vestige of science is the professional honesty and integrity of scientists, who must face these adverse conditions. This is the real and effective factor that will permit science, as a social institution, to get through these difficult years. In these matters international recognition and collaboration are very significant. Of special importance is the support for Russian science by the INTAS collaboration and the Soros foundation based on external expertise. Academia Europaea has brought recognition and moral support to many of those who were at a loss in these years of transition.
On the other hand, it may be thought that these conditions, so manifest in Russia and multiplied by the social collapse, are also the result of a global intellectual crisis through which European civilization is now passing. Many of these symptoms can be traced to the crisis of rationalism. The criticism of rational thinking and antiscience is not unknown in the West. In Russia we do not as yet have deconstructionism as an influential trend in philosophy, but hypercriticism and the challenging of conventional wisdom are part of the story. Now, after a few years of such a critical approach to the past and present, those who were most outspoken have failed to deliver any positive message. On a political level this is leading to disillusionment with the ideas of democracy and the ideals of Western culture. It is now obvious in the arts, and perhaps in no field is it so noticeable as in cinematography. All this may seem rather far from science, but it certainly demonstrates the changes in social consciousness now under way, and the changing mores and values of the people.
The most unfortunate thing is that economic decline is leading to a marked shift to the right with the emergence of nationalistic mass movements. If these developments carry on, Russia may follow the example of the German Weimar Republic, a historic analogy that is worth remembering. Thus we see that the symptoms of the pseudoscientific crisis may signal a deep-rooted and socially dangerous development both for reason and democracy.
Finally, what are the real long-term and profound reasons for these irrational developments, the decline of reason at a time when the possibilities for development are so numerous and the promise of science so great? It may be assumed that in facing and, it is to be hoped, resolving these issues a global approach is really necessary. These general trends are hardly ever resolved by the sorts of reductionist explanations offered on a short-term cause-and-effect basis. Perhaps these events have to be seen in the larger perspective, in the longue durée of great structural changes in our growth and development. But here we are lacking the time scale to objectively observe these events of our daily concern. Can this loss of relevance and bearings be due to the very rapid changes now happening in the globally connected world – when there is no time for the longer processes of culture to take place in a world overrun by numerical growth, and when evolution has no time to develop by trial and error?
Proponents of the Standard Big Bang (SBB) model exemplify the smiling side. They claim that predictions of this model have been verified and all observations are consistent with it. This sounds impressive, but how can one reconcile such an assessment with the fact that discussions at conferences and in journals remain heated over at least five major areas of disagreement between nature and the SBB model? These include the dark-matter problem (90 to 99 percent of the universe is unseen mystery stuff), the causality problem (no explanation for why the universe popped out of a singularity), and the age problem (stars that appear to be older than the universe itself).
For example, in the world’s premier astrophysics journal I have seen one article by a leading expert in nucleosynthesis claim that the amount of helium in the cosmos must fall within a certain range or the Standard Big Bang model is “falsified,” while another paper in the same issue reports helium observations that lie outside that range. Since the public is regularly told that the success of nucleosynthesis predictions provides an evidential cornerstone for the Big Bang theory, the uncertainty in the actual data must come as a surprise to many.
As an explanation, some cosmologists might argue that the level of technical difficulty and detail in their presentations must be tailored to the needs and abilities of each audience. Quite true, but the take-home messages for public and academic discourse should still be the same. The Big Bang theory should not be described as “correct” to one audience and “in trouble” to another.
So which is it: has the Big Bang paradigm been “proved” or “falsified”? Perhaps the best answer is that the SBB model, like all scientific paradigms, is an approximation – and by definition approximations cannot be completely right. It is correct to say that Newton’s theory of gravity is a relic (though one still used in space-flight calculations), but incorrect to assert that Einstein’s newer theory is the final answer. Both are approximations, with the latter’s view of space, time, matter, and gravitation being far more accurate, mathematically complex, and conceptually exquisite. In time some broader, deeper theory may eclipse them both.
Still, how does one account for the confident smile that the cosmologist shows to the public and the furrowed brow shown to colleagues? Perhaps it is rooted in the natural tendency to assure one’s patrons (the taxpayers) that everything is under control. Another possibility is that scientists like to be viewed as brilliant thinkers who have all the answers. A third is simply that everything gets hyped these days. And last, as Thomas S. Kuhn implies in his classic, The Structure of Scientific Revolutions, the scientist is gradually steeped in the prevailing paradigm until it becomes common sense, and other ideas sound and feel wrong. Eventually one’s professional status becomes linked to that of the prevailing paradigm – and yet one’s most cherished goal is to discover something radically new. Such is the intellectual split personality of the scientist.
In the long run forthrightness is crucial both to scientists and to their broader audience. Cosmologists, science writers, and their editors should scrupulously treat the Big Bang model as an approximation. Even if it is a reasonably accurate explanation for the observable universe, our purview may represent only the tiniest of blips in an unimaginably larger and more intricate universe. As perpetual students of nature, we should not feel the least bit embarrassed about this but rather be proud of the human struggle to comprehend. As Einstein put it, “All our science, measured against reality, is primitive and childlike – and yet it is the most precious thing we have.”
Scientists do not struggle toward a “final theory.” We have seen the folly of “absolute certainty” often enough to know better. Cosmologists should remain restless, questioning, unsatisfied – openly admitting current weaknesses. Good scientific theories, like the Big Bang model, are steppingstones to a widening, deepening understanding of the cosmos. Undoubtedly there are exciting new paradigms that await exploration. Science evolves.