Wednesday, July 29, 2009

replication









DNA replication, the basis for biological inheritance, is a fundamental process occurring in all living organisms to copy their DNA. This process is "semiconservative" in that each strand of the original double-stranded DNA molecule serves as template for the reproduction of the complementary strand. Hence, following DNA replication, two identical DNA molecules have been produced from a single double-stranded DNA molecule. Cellular proofreading and error-checking mechanisms ensure near perfect fidelity for DNA replication.[1][2]

In a cell, DNA replication begins at specific locations in the genome, called "origins".[3] Unwinding of DNA at the origin, and synthesis of new strands, forms a replication fork. In addition to DNA polymerase, the enzyme that synthesizes the new DNA by adding nucleotides matched to the template strand, a number of other proteins are associated with the fork and assist in the initiation and continuation of DNA synthesis.

DNA replication can also be performed in vitro (outside a cell). DNA polymerases, isolated from cells, and artificial DNA primers are used to initiate DNA synthesis at known sequences in a template molecule. The polymerase chain reaction (PCR), a common laboratory technique, employs such artificial synthesis in a cyclic manner to amplify a specific target DNA fragment from a pool of DNA.
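To make the "cyclic amplification" idea concrete, here is a minimal sketch of the arithmetic, assuming for illustration an idealised doubling each cycle (real PCR efficiency is somewhat lower):

```python
# Minimal sketch of PCR amplification arithmetic.
# Assumes every cycle copies a fraction "efficiency" of the templates
# (1.0 = perfect doubling); real reactions fall short of this ideal.

def pcr_copies(initial_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Estimate copy number after a given number of PCR cycles."""
    return initial_copies * (1.0 + efficiency) ** cycles

if __name__ == "__main__":
    # Starting from 100 target molecules and running 30 cycles:
    print(f"{pcr_copies(100, 30):.3e} copies (perfect doubling)")
    print(f"{pcr_copies(100, 30, 0.9):.3e} copies (90% efficiency per cycle)")
```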

Tuesday, July 28, 2009

recombination

Genetic recombination is the process by which a strand of genetic material (usually DNA; but can also be RNA) is broken and then joined to a different DNA molecule. In eukaryotes recombination commonly occurs during meiosis as chromosomal crossover between paired chromosomes. This process leads to offspring having different combinations of genes from their parents and can produce new chimeric alleles. In evolutionary biology this shuffling of genes is thought to have many advantages, including allowing asexually reproducing organisms to avoid Muller's ratchet.

In molecular biology "recombination" can also refer to artificial and deliberate recombination of disparate pieces of DNA, often from different organisms, creating what is called recombinant DNA.

Enzymes called recombinases catalyze natural recombination reactions. RecA, the recombinase found in E. coli, is responsible for the repair of DNA double strand breaks (DSBs). In yeast and other eukaryotic organisms there are two recombinases required for repairing DSBs. The RAD51 protein is required for mitotic and meiotic recombination and the DMC1 protein is specific to meiotic recombination.

replication bubble

Once helicases have opened the molecule, an area known as the replication bubble forms (always initiated at a defined set of nucleotides, the origin of replication).

meselson and stahl experiment

Meselson-Stahl experiment

A summary of the three postulated methods of DNA synthesis

The Meselson-Stahl experiment was an experiment by Matthew Meselson and Franklin Stahl which demonstrated that DNA replication is semiconservative. Semiconservative replication means that when the double-stranded DNA helix is replicated, each of the two resulting double-stranded helices consists of one strand from the original helix and one newly synthesized strand.

Nitrogen is a major constituent of DNA. 14N is by far the most abundant isotope of nitrogen, but DNA with the heavier 15N isotope is also viable. The 15N isotope is not radioactive, only heavier than common nitrogen.

E. coli were grown for several generations in a medium containing 15N. When DNA is extracted from these cells and centrifuged on a salt density gradient, the DNA separates out at the point at which its density equals that of the salt solution; the DNA of these cells therefore banded at a higher density (was heavier) than DNA containing only 14N. After that, E. coli cells with only 15N in their DNA were put back into a 14N medium and were allowed to divide only once. DNA was then extracted from the cells and compared to pure 14N DNA and pure 15N DNA. It was found to have a density close to intermediate between the two. Since conservative replication would result in equal amounts of DNA of the higher and lower densities (but no DNA of intermediate density), conservative replication was excluded. However, this result was consistent with both semiconservative and dispersive replication: semiconservative replication would result in double-stranded DNA with one strand of 15N DNA and one of 14N DNA, while dispersive replication would result in double-stranded DNA with both strands containing mixtures of 15N and 14N, either of which would have appeared as DNA of intermediate density.

DNA was then extracted from cells which had been grown for several generations in a 15N medium, followed by two divisions in a 14N medium. DNA from these cells was found to consist of equal amounts of two different densities, one corresponding to the intermediate density of DNA from cells grown for only one division in 14N medium, the other corresponding to DNA from cells grown exclusively in 14N medium. This was inconsistent with dispersive replication, which would have resulted in a single density, lower than the intermediate density of the one-generation cells but still higher than that of cells grown only in 14N medium, as the original 15N DNA would have been split evenly among all DNA strands. The result was consistent with semiconservative replication: half of the second-generation cells would have one strand of the original 15N DNA along with one of 14N DNA, accounting for the DNA of intermediate density, while the DNA in the other half of the cells would consist entirely of 14N DNA, one strand synthesized in the first division and the other in the second. This discovery was hugely important for the development of biology and is a major aid to the treatment of disease (e.g. cancer).
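As an illustration of why the banding pattern discriminates between the models, the sketch below (an illustration added here, not part of the original experiment) computes the density classes each model predicts after n generations in 14N medium, starting from fully 15N-labelled DNA and assuming the idealised version of each model:

```python
# Sketch: predicted fractions of "heavy" (15N/15N), "hybrid" (15N/14N) and
# "light" (14N/14N) duplexes after n generations in 14N medium, starting from
# one fully 15N-labelled duplex, under idealised versions of the three models.

def semiconservative(n):
    total = 2 ** n
    hybrid = 2 if n >= 1 else 0   # the two original 15N strands persist, each paired with 14N
    heavy = 0 if n >= 1 else 1
    return heavy / total, hybrid / total, (total - heavy - hybrid) / total

def conservative(n):
    total = 2 ** n
    return 1 / total, 0.0, (total - 1) / total   # one intact heavy duplex, the rest light

def dispersive(n):
    # every duplex carries the same mixture; report its remaining 15N content
    return 1 / (2 ** n)

if __name__ == "__main__":
    for n in (1, 2):
        print(f"generation {n}:")
        print("  semiconservative (heavy, hybrid, light):", semiconservative(n))
        print("  conservative     (heavy, hybrid, light):", conservative(n))
        print("  dispersive       15N fraction per duplex:", dispersive(n))
```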

replication types

Initiation
The initiation of replication is mediated by a protein that binds to a region of the origin known as the DnaA box. In E. coli there are 5 DnaA boxes, each of which contains a highly conserved 9 bp consensus sequence 5' - TTATCCACA - 3'. Binding of DnaA to this region causes it to become negatively supercoiled. Following this, a region of OriC upstream of the DnaA boxes (known as DnaB boxes) becomes melted. There are three of these regions; each is 13 bp long and AT-rich (which facilitates melting, because less energy is required to break the two hydrogen bonds that form between A and T nucleotides). This region has the consensus sequence 5' - GATCTNTTNTTTT - 3'. Melting of the DnaB boxes requires ATP (which is hydrolyzed by DnaA). Following melting, DnaA recruits a hexameric helicase (six DnaB proteins) to opposite ends of the melted DNA. This is where the replication fork will form. Recruitment of helicase requires six DnaC proteins, each of which is attached to one subunit of helicase. Once this complex is formed, an additional five DnaA proteins bind to the original five DnaA proteins to form five DnaA dimers. DnaC is then released, and the prepriming complex is complete. In order for DNA replication to continue, SSB protein is needed to prevent the single strands of DNA from forming any secondary structures and to prevent them from reannealing, and DNA gyrase is needed to relieve the stress (by creating negative supercoils) created by the action of DnaB helicase. The unwinding of DNA by DnaB helicase allows for primase (DnaG) and RNA polymerase to prime each DNA template so that DNA synthesis can begin.
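To illustrate what a consensus sequence such as 5'-TTATCCACA-3' means in practice, here is a small sketch that scans a DNA string for the two consensus sequences quoted above, treating N as "any base"; the example sequence itself is invented:

```python
import re

# Sketch: find occurrences of a consensus sequence in a DNA string.
# The consensus strings are the ones quoted in the text; the example
# sequence is made up purely for illustration.

def consensus_to_regex(consensus: str) -> re.Pattern:
    """Convert a simple consensus (N = any base) into a regular expression."""
    return re.compile(consensus.replace("N", "[ACGT]"))

DNAA_BOX = consensus_to_regex("TTATCCACA")          # 9-bp DnaA box consensus
THIRTEEN_MER = consensus_to_regex("GATCTNTTNTTTT")  # AT-rich 13-bp consensus

if __name__ == "__main__":
    example = "AAGATCTATTGTTTTCCTTATCCACAGGTTATCCACAAT"   # invented test sequence
    for name, pattern in [("DnaA box", DNAA_BOX), ("13-mer", THIRTEEN_MER)]:
        hits = [m.start() for m in pattern.finditer(example)]
        print(f"{name}: found at positions {hits}")
```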


Elongation
Once priming is complete, DNA polymerase III holoenzyme is loaded into the DNA and replication begins. The catalytic mechanism of DNA polymerase III involves the use of two metal ions in the active site, and a region in the active site that can discriminate between deoxynucleotides and ribonucleotides. The metal ions are general divalent cations that help the 3' OH initiate a nucleophilic attack onto the alpha phosphate of the deoxyribonucleotide and orient and stabilize the negatively charged triphosphate on the deoxyribonucleotide. Nucleophilic attack by the 3' OH on the alpha phosphate releases pyrophosphate, which is then subsequently hydrolyzed (by inorganic phosphatase) into two phosphates. This hydrolysis drives DNA synthesis to completion.

Furthermore, DNA polymerase III must be able to distinguish between correctly paired bases and incorrectly paired bases. This is accomplished by distinguishing Watson-Crick base pairs through the use of an active site pocket that is complementary in shape to the structure of correctly paired nucleotides. This pocket has a tyrosine residue that is able to form van der Waals interactions with the correctly paired nucleotide. In addition, dsDNA (double stranded DNA) in the active site has a wider and shallower minor groove that permits the formation of hydrogen bonds with the third nitrogen of purine bases and the second oxygen of pyrimidine bases. Finally, the active site makes extensive hydrogen bonds with the DNA backbone. These interactions result in the DNA polymerase III closing around a correctly paired base. If a base is inserted and incorrectly paired, these interactions could not occur due to disruptions in hydrogen bonding and van der Waals interactions.

DNA is read in the 3' → 5' direction; therefore, nucleotides are synthesized (or attached to the template strand) in the 5' → 3' direction. However, one of the parent strands of DNA is 3' → 5' while the other is 5' → 3'. To solve this, replication occurs in opposite directions. Heading towards the replication fork, the leading strand is synthesized in a continuous fashion, requiring only one primer. On the other hand, the lagging strand, heading away from the replication fork, is synthesized in a series of short fragments known as Okazaki fragments, consequently requiring many primers. The RNA primers of Okazaki fragments are subsequently degraded by RNase H and DNA polymerase I (exonuclease), and the gaps (or nicks) are filled with deoxyribonucleotides and sealed by the enzyme ligase.
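A small sketch of the strand bookkeeping described above (the template sequence is invented): because the new strand is complementary and antiparallel to its template, a template written 5'→3' gives a new strand that is its reverse complement when also written 5'→3'.

```python
# Sketch: the new strand is complementary and antiparallel to its template.
# If the template is written 5'->3', the newly synthesized strand written
# 5'->3' is the reverse complement. The example sequence is invented.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def new_strand(template_5to3: str) -> str:
    """Return the newly synthesized strand (5'->3') for a template given 5'->3'."""
    return template_5to3.translate(COMPLEMENT)[::-1]

if __name__ == "__main__":
    template = "ATGGCTTACG"                                # written 5'->3'
    print("template   5'->3':", template)
    print("new strand 5'->3':", new_strand(template))      # CGTAAGCCAT
```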


Termination
Termination of DNA replication in E. coli is completed through the use of termination sequences and the Tus protein. These sequences allow the two replication forks to pass through in only one direction, but not the other. However, these sequences are not required for termination of replication.

Regulation of DNA replication is achieved through several mechanisms. These involve the ratio of ATP to ADP, the ratio of DnaA protein to the number of DnaA boxes, and the hemimethylation and sequestering of OriC. The ratio of ATP to ADP indicates that the cell has reached a specific size and is ready to divide. This "signal" occurs because in a rich medium the cell grows quickly and accumulates excess ATP. DnaA binds equally well to ATP or ADP, but only the DnaA-ATP complex is able to initiate replication; thus, in a fast-growing cell there will be more DnaA-ATP than DnaA-ADP. Because the levels of DnaA are strictly regulated, and five DnaA-DnaA dimers are needed to initiate replication, the ratio of DnaA to the number of DnaA boxes in the cell is important. After DNA replication is complete this ratio is halved, so DNA replication cannot occur again until the level of DnaA protein increases. Finally, newly replicated OriC is sequestered by a membrane-binding protein called SeqA. This protein binds to hemimethylated GATC sequences. This four-bp sequence occurs 11 times in OriC, and newly synthesized DNA has only its parent strand methylated. DAM methyltransferase methylates the newly synthesized strand of DNA only if it is not bound to SeqA. The importance of hemimethylation is twofold: firstly, OriC becomes inaccessible to DnaA, and secondly, DnaA binds better to fully methylated DNA than to hemimethylated DNA.

replication fork

The replication fork is a structure that forms within the nucleus during DNA replication. It is created by helicases, which break the hydrogen bonds holding the two DNA strands together. The resulting structure has two branching "prongs", each one made up of a single strand of DNA, that are called the leading and lagging strands. DNA polymerase creates new partners for the two strands by adding nucleotides.

Tuesday, July 14, 2009

HIV TEST










HIV tests are used to detect the presence of the human immunodeficiency virus in serum, saliva, or urine. Such tests may detect HIV antibodies, antigens, or RNA.

Terminology
The window period is the time from infection until a test can detect any change. The average window period with HIV-1 antibody tests is 22 days for subtype B. Antigen testing cuts the window period to approximately 16 days and NAT (Nucleic Acid Testing) further reduces this period to 12 days.[1]
Performance of medical tests is often described in terms of:
• sensitivity: The percentage of the results that will be positive when HIV is present
• specificity: The percentage of the results that will be negative when HIV is not present.
All diagnostic tests have limitations, and sometimes their use may produce erroneous or questionable results.
• False positive results are when the test concludes HIV is present when, in fact, the person is not infected.
• False negative results are when the test concludes HIV is not present, when in fact the person is infected.
Nonspecific reactions, hypergammaglobulinemia, or the presence of antibodies directed to other infectious agents that may be antigenically similar to HIV can produce false positive results. Autoimmune diseases, such as systemic lupus erythematosus, can also cause false positive results.
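As a purely numerical illustration of the sensitivity and specificity definitions above (the counts below are invented, not real test data):

```python
# Sketch: sensitivity and specificity from a 2x2 table of test results.
# The counts are invented purely to illustrate the definitions.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of infected people the test calls positive."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Fraction of uninfected people the test calls negative."""
    return true_neg / (true_neg + false_pos)

if __name__ == "__main__":
    tp, fn = 998, 2        # infected people: detected vs. missed (false negatives)
    tn, fp = 9970, 30      # uninfected people: correctly negative vs. false positives
    print(f"sensitivity: {sensitivity(tp, fn):.3%}")
    print(f"specificity: {specificity(tn, fp):.3%}")
```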

Antibody tests
HIV antibody tests are specifically designed for routine diagnostic testing of adults; these tests are inexpensive and extremely accurate.

ELISA
The enzyme-linked immunosorbent assay (ELISA), or enzyme immunoassay (EIA), was the first screening test commonly employed for HIV. It has a high sensitivity.
In an ELISA test, a person's serum is diluted 400-fold and applied to a plate to which HIV antigens have been attached. If antibodies to HIV are present in the serum, they may bind to these HIV antigens. The plate is then washed to remove all other components of the serum. A specially prepared "secondary antibody" — an antibody that binds to human antibodies — is then applied to the plate, followed by another wash. This secondary antibody is chemically linked in advance to an enzyme. Thus the plate will contain enzyme in proportion to the amount of secondary antibody bound to the plate. A substrate for the enzyme is applied, and catalysis by the enzyme leads to a change in color or fluorescence. ELISA results are reported as a number; the most controversial aspect of this test is determining the "cut-off" point between a positive and negative result.

elisa test









Enzyme-linked immunosorbent assay, also called ELISA, enzyme immunoassay or EIA, is a biochemical technique used mainly in immunology to detect the presence of an antibody or an antigen in a sample. The ELISA has been used as a diagnostic tool in medicine and plant pathology, as well as a quality control check in various industries. In simple terms, in ELISA an unknown amount of antigen is affixed to a surface, and then a specific antibody is washed over the surface so that it can bind to the antigen. This antibody is linked to an enzyme, and in the final step a substance is added that the enzyme can convert to some detectable signal. Thus in the case of fluorescence ELISA, when light of the appropriate wavelength is shone upon the sample, any antigen/antibody complexes will fluoresce so that the amount of antigen in the sample can be inferred through the magnitude of the fluorescence.
Performing an ELISA involves at least one antibody with specificity for a particular antigen. The sample with an unknown amount of antigen is immobilized on a solid support (usually a polystyrene microtiter plate) either non-specifically (via adsorption to the surface) or specifically (via capture by another antibody specific to the same antigen, in a "sandwich" ELISA). After the antigen is immobilized the detection antibody is added, forming a complex with the antigen. The detection antibody can be covalently linked to an enzyme, or can itself be detected by a secondary antibody which is linked to an enzyme through bioconjugation. Between each step the plate is typically washed with a mild detergent solution to remove any proteins or antibodies that are not specifically bound. After the final wash step the plate is developed by adding an enzymatic substrate to produce a visible signal, which indicates the quantity of antigen in the sample. Older ELISAs utilize chromogenic substrates, though newer assays employ fluorogenic substrates enabling much higher sensitivity

elisa test






Testing plant samples for virus using the ELISA technique - photo tour

The first step is to add an antibody to a specific virus to the ELISA plates. Commercial ELISA test kits with antibodies for all major soybean viruses are available. We use antibodies for four common soybean viruses: alfalfa mosaic, soybean mosaic, bean pod mottle and tobacco streak virus.

Next we prepare sap from plant samples to be tested for virus. We grind the plant tissue with a phosphate buffer. This is the same grinding procedure we use in the plant indicator test for viruses. The sap is loaded into the coated plates. If a specific virus is present, it will bind to the antibodies.

A positive control and a negative control are included in each ELISA plate.
The positive control is sap obtained from plants known to be infected with the target virus.
Sap from a healthy plant is used as a negative control.

A map is used to keep track of the location of each sample as we fill the plate. This is a good place to jot down important notes about the sample.

The plates are incubated overnight, and then emptied. A quick flip of the wrist empties the wells without contamination.

Another chemical is added that will react with the enzyme attached to antibody. The resulting color will give an indication if virus was present in the plant sap. In this case, the yellow wells indicate samples collected from plants infected with Bean pod mottle virus (BPMV).

An automatic plate reader is used to quantify ELISA results, based on the color reaction.

Values above the mean of the negative controls plus four times their standard deviation are considered positive (generally above 0.1).
Reference: Crowther, J.R. 1995. Methods in Molecular Biology: ELISA Theory and Practice. Totowa, NJ: Humana Press.
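That cut-off rule is straightforward to apply in software; here is a minimal sketch (well labels and optical-density readings are invented) using the mean of the negative controls plus four standard deviations as the threshold:

```python
# Sketch: classify ELISA wells using the cut-off described above:
# positive if OD > mean(negative controls) + 4 * SD(negative controls).
# All optical-density values below are invented for illustration.

from statistics import mean, stdev

def elisa_cutoff(negative_controls):
    return mean(negative_controls) + 4 * stdev(negative_controls)

def classify(samples, negative_controls):
    cutoff = elisa_cutoff(negative_controls)
    calls = {well: ("positive" if od > cutoff else "negative")
             for well, od in samples.items()}
    return calls, cutoff

if __name__ == "__main__":
    neg_controls = [0.041, 0.047, 0.052, 0.044]          # healthy-plant sap wells
    samples = {"A1": 0.62, "A2": 0.05, "A3": 0.31}       # test wells
    results, cutoff = classify(samples, neg_controls)
    print(f"cut-off OD: {cutoff:.3f}")
    print(results)
```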

rpr test







Rapid Plasma Reagin (RPR) refers to a type of test that looks for non-specific antibodies in the blood of the patient that may indicate that the organism (Treponema pallidum) that causes syphilis is present. The term "reagin" means that this test does not look for antibodies against the actual bacterium, but rather for antibodies against substances released by cells when they are damaged by T. pallidum. Another test often used to screen for syphilis is the Venereal Disease Research Laboratory (VDRL) slide test. However, the RPR test is generally preferred due to its ease of use.

In addition to screening for syphilis, an RPR level (also called a "titer") can be used to track the progress of the disease over time and its response to therapy.
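As an illustration of how a titer might be reported (a sketch that assumes a two-fold serial dilution scheme; the reactivity pattern is invented):

```python
# Sketch: report a titer as the highest serum dilution that is still reactive
# in a two-fold dilution series (1:1, 1:2, 1:4, ...). The reactivity pattern
# below is invented for illustration.

def reported_titer(reactive_flags):
    """reactive_flags[i] is True if the 1:2**i dilution was reactive."""
    titer = None
    for i, reactive in enumerate(reactive_flags):
        if reactive:
            titer = 2 ** i
        else:
            break                      # stop at the first non-reactive dilution
    return f"1:{titer}" if titer else "non-reactive"

if __name__ == "__main__":
    # reactive at 1:1 through 1:16, non-reactive from 1:32 onwards
    print(reported_titer([True, True, True, True, True, False, False]))   # 1:16
```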

The RPR test is an effective screening test, as it is very good at detecting people without symptoms who are affected by syphilis. However, the test may suggest that people have syphilis who in reality do not (i.e., it may produce false positives). False positives can be seen in viral infections (Epstein-Barr, hepatitis, varicella, measles), lymphoma, tuberculosis, malaria, endocarditis, connective tissue disease, pregnancy, intravenous drug abuse, or contamination.[1] As a result, these two screening tests should always be followed up by a more specific treponemal test. Tests based on monoclonal antibodies and immunofluorescence, including the Treponema pallidum hemagglutination assay (TPHA) and fluorescent treponemal antibody absorption (FTA-ABS), are more specific and more expensive. Unfortunately, false positives can still occur in related treponemal infections such as yaws and pinta. Tests based on enzyme-linked immunoassays are also used to confirm the results of simpler screening tests for syphilis.

Other types of tests are currently being evaluated as possible alternatives to, or as replacements for, the rapid plasma reagin test. One of these alternatives is an immunochromatographic strip test. A study published in February 2006 found that this test outperformed the RPR test in sensitivity and specificity, and it does not require a laboratory to process the results.

The fluorescent treponemal antibody absorption (FTA-ABS) test is the most specific test for syphilis; if it is positive, it confirms the diagnosis.

widal test





the left one is positive




The Widal test is a presumptive serological test for enteric fever or undulant fever. In the case of Salmonella infections, it is a demonstration of agglutinating antibodies against the O-somatic and H-flagellar antigens in the blood. For brucellosis, only the O-somatic antigen is used. It is not a very accurate method, since patients are often exposed to other bacteria of this genus (e.g. Salmonella enteritidis, Salmonella typhimurium) that induce cross-reactivity; many people have antibodies against these enteric pathogens, which also react with the antigens in the Widal test, causing a false-positive result. Test results need to be interpreted carefully in the light of a past history of enteric fever, typhoid vaccination, and the general level of antibodies in the population in endemic areas of the world. Typhidot is another test used to ascertain the diagnosis of typhoid fever. As with all serological tests, the rise in antibody levels needed to make the diagnosis takes 7-14 days, which limits their use. Other means of diagnosing Salmonella typhi (and paratyphi) include cultures of blood, urine and faeces. The organism also produces H2S from thiosulfate.

Often 2-mercaptoethanol is added. This agent more easily denatures the IgM class of antibodies, so if a decrease in the titer is seen after using this agent, it means that the contribution of IgM has been removed, leaving the IgG component. This differentiation of antibody classes is important, as it allows for the distinction of a recent infection (IgM) from an old infection (IgG).

So we can define this test as " a test involving agglutination of typhoid bacilli when they are mixed with serum containing typhoid antibodies from an individual having typhoid fever; used to detect the presence of Salmonella typhi and S. paratyphi."

serological test








Serology is the scientific study of blood serum. In practice, the term usually refers to the diagnostic identification of antibodies in the serum. Such antibodies are typically formed in response to an infection (against a given microorganism), against other foreign proteins (in response, for example, to a mismatched blood transfusion), or against one's own proteins (in instances of autoimmune disease).
Serological tests may be performed for diagnostic purposes when an infection is suspected, in rheumatic illnesses, and in many other situations, such as checking an individual's blood type. Serology blood tests help to diagnose patients with certain immune deficiencies associated with the lack of antibodies, such as X-linked agammaglobulinemia. In such cases, tests for antibodies will be consistently negative.
There are several serology techniques that can be used depending on the antibodies being studied. These include: ELISA, agglutination, precipitation, complement-fixation, and fluorescent antibodies.
Some serological tests are not limited to blood serum, but can also be performed on other bodily fluids such as semen and saliva, which have (roughly) similar properties to serum.
Serological tests may also be used forensically, generally to link a perpetrator to a piece of evidence (e.g., linking a rapist to a semen sample).

Friday, July 10, 2009

Antibiotic sensitivity






Antibiotic sensitivity is a term used to describe the susceptibility of bacteria to antibiotics. Antibiotic susceptibility testing (AST) is usually carried out to determine which antibiotic will be most successful in treating a bacterial infection in vivo. Testing for antibiotic sensitivity is often done by the Kirby-Bauer method. Small wafers containing antibiotics are placed onto a plate upon which bacteria are growing. If the bacteria are sensitive to the antibiotic, a clear ring, or zone of inhibition, is seen around the wafer, indicating poor growth. Other methods to test antimicrobial susceptibility include the Stokes method and the E-test (also based on antibiotic diffusion), as well as agar and broth dilution methods for minimum inhibitory concentration (MIC) determination.
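A minimal sketch of how a disc-diffusion readout could be interpreted in software; the zone-diameter breakpoints and measurements below are hypothetical placeholders, not real CLSI/EUCAST breakpoints:

```python
# Sketch: interpret Kirby-Bauer zone-of-inhibition diameters against breakpoints.
# The breakpoints and measurements are hypothetical placeholders, NOT real
# clinical breakpoints.

BREAKPOINTS_MM = {
    # antibiotic: (resistant if zone <= this, susceptible if zone >= this)
    "ampicillin": (13, 17),
    "ciprofloxacin": (15, 21),
}

def interpret(antibiotic: str, zone_mm: float) -> str:
    resistant_max, susceptible_min = BREAKPOINTS_MM[antibiotic]
    if zone_mm <= resistant_max:
        return "resistant"
    if zone_mm >= susceptible_min:
        return "susceptible"
    return "intermediate"

if __name__ == "__main__":
    for drug, zone in [("ampicillin", 20), ("ciprofloxacin", 14)]:
        print(f"{drug}: {zone} mm -> {interpret(drug, zone)}")
```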

Ideal antibiotic therapy is based on determination of the aetiological agent and its relevant antibiotic sensitivity. Empiric treatment is often started before laboratory microbiological reports are available when treatment should not be delayed due to the seriousness of the disease. The effectiveness of individual antibiotics varies with the location of the infection, the ability of the antibiotic to reach the site of infection, and the ability of the bacteria to resist or inactivate the antibiotic. Some antibiotics actually kill the bacteria (bactericidal), whereas others merely prevent the bacteria from multiplying (bacteriostatic) so that the host's immune system can overcome them.

identification of bacteria








The identification of bacteria is a careful and systematic process that uses many different techniques to narrow down the types of bacteria that are present in an unknown bacterial culture, such as the infected blood of someone dangerously ill with meningitis.


The techniques used at the earliest stages are relatively simple. An unknown sample may contain different bacteria, so a culture is made to grow individual bacterial colonies. Bacteria taken from each type of colony are then used to make a thin smear on a glass slide, and this is examined using a light microscope. Viewing the bacteria shows whether they are cocci or bacilli or one of the rarer forms, such as the corkscrew-shaped spirochaetes.



Gram Staining
Cocci and bacilli can be either gram positive bacteria or gram negative bacteria, depending on the structure of their cell wall. The Gram stain is named after Hans Christian Gram, a bacteriologist from Denmark who developed the technique in the 1880s. The test is performed on a thin smear of an individual bacterial colony that has been spread onto a glass slide. Gram positive bacteria retain an initial stain, crystal violet, even when the bacterial smear is rinsed with a mixture of acetone and ethanol. The solvent removes the dark blue colour from gram negative bacteria, dissolving away some of the thin cell wall. When a second stain, a pink dye called fuchsin, is then added, gram positive bacteria are unaffected, as they are already stained dark blue, but the gram negative bacteria turn bright pink. The colour difference can be seen easily using a light microscope.

urine culture

Urine Culture


A urine culture is a test to find and identify germs (usually bacteria) that may be causing a urinary tract infection (UTI). Urine in the bladder normally is sterile; it does not contain any bacteria or other organisms (such as fungi). But bacteria can enter the urethra and cause an infection.
A urine sample is kept under conditions that allow bacteria and other organisms to grow. If few organisms grow, the test is negative. If organisms grow in numbers large enough to indicate an infection, the culture is positive. The type of organisms causing the infection are identified with a microscope or by chemical tests.
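Growth is commonly quantified as colony-forming units (CFU) per millilitre; a minimal sketch of that arithmetic follows (the loop volume and colony count are invented, and the 10^5 CFU/mL threshold is used only as the commonly cited cut-off for significant bacteriuria):

```python
# Sketch: estimate colony-forming units per mL of urine from a plate count.
# Loop volume and colony count are invented; the 1e5 CFU/mL threshold is the
# commonly cited cut-off for "significant" growth, used here as an assumption.

def cfu_per_ml(colonies: int, volume_plated_ml: float, dilution_factor: float = 1.0) -> float:
    return colonies * dilution_factor / volume_plated_ml

if __name__ == "__main__":
    count = 150                      # colonies counted on the plate
    loop_volume_ml = 0.001           # 1 microlitre calibrated loop
    density = cfu_per_ml(count, loop_volume_ml)
    print(f"{density:.1e} CFU/mL ->",
          "significant growth" if density >= 1e5 else "not significant")
```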
Urinary tract infections are more common in women and girls than in men. This may be partly because the female urethra is shorter and closer to the anus, which allows bacteria from the intestines to come into contact more easily with the urethra. Men also have an antibacterial substance in their prostate gland that reduces their risk.
If the urine culture is positive, other tests may be done to help choose which antibiotic will do the best job treating the infection. This is called sensitivity testing.

Thursday, July 9, 2009

Blood culture

Blood culture is microbiological culture of blood. It is employed to detect infections that are spreading through the bloodstream (bacteremia, septicemia).
Indications:
• Core temperature out of normal range
• Focal signs of infection
• Tachycardia, hyper- or hypotension, or raised respiratory rate
• Chills or rigors
• Raised or very low WCC
• New or worsening confusion
N.B. signs of sepsis may be minimal or absent in the very young and the elderly.
Purpose: To establish the diagnosis in suspected septicaemia, endocarditis, bacterial meningitis, pericarditis, septic arthritis, osteomyelitis, pyelonephritis or enteric fever.
To identify the causative organisms in severe pneumonia, postpartum fever, pelvic inflammatory disease, cannula sepsis, neonatal epiglottitis and sepsis, and in the investigation of patients with pyrexia of unknown origin (PUO). However, negative growth does not exclude infection.
The test: 2-3 specimens of 20 ml of blood (2-4 ml for infants, depending on weight) collected from separate sites within an hour (unless sepsis, fungal infection, endocarditis or endovascular/catheter-related infection is suspected, when they should be at least an hour apart), cultured in enriched broth for aerobes, anaerobes and yeasts. Ideally, the sample is collected at the pyrexial peak and prior to antibiotic therapy. Where antibiotics have been commenced the samples should be taken immediately before the next dose. Blood cultures should not be routine.
Incubation is for 5-7 days (although most pathogens will grow within 1-2 days) and often extended for 14-21 days for suspected bacterial endocarditis, Brucella or yeasts. Growth is detected by the presence of turbidity, haemolysis, Gram stain or more commonly production of carbon dioxide or change in pH (detected by an automated monitoring system).
Risks: The usual risks of venepuncture and the occurrence of false-positive results (3+%) leading to inappropriate treatment (Madeo et al, 2003).
Procedure:
1. Assemble the equipment, check bottles for damage, check the expiry date on the bottles and wash hands as per policy.
2. Check the patient's identity.
3. Explain the procedure, check for needle phobia and gain consent.
4. Clean visibly soiled skin with soap and water.
5. Check the patient is comfortable.
6. Clean the bottle tops using separate alcohol/chlorhexidine wipes, as below, and discard these wipes.
7. Apply the tourniquet (disposable in the Southern Area) and select a vein.
8. Cleanse for 30+ seconds with 2% chlorhexidine gluconate in 70% isopropyl alcohol (e.g. Clinelle) and allow to dry passively (Mimoz et al, 1999; Pratt et al, 2007). If a central line is being used, disinfect the access port with 2% chlorhexidine gluconate in 70% isopropyl alcohol.
9. Glove with clean gloves. Do not palpate the vein following cleaning. Sterile gloves are not required.
10. Collect the blood culture sample first (prior to other bloods) using a closed system (e.g. Bio-Merieux holder for BacT/ALERT blood culture and safety blood collection set + luer adapter safety butterfly). Collect the aerobic sample first (10 ml) and then 10 ml into the anaerobic bottle. Babies should have a single yellow 'pedibact' bottle used (aerobic) and an anaerobic sample. Ensure the bottle is positioned below the puncture site to avoid reflux of the broth into the patient. Do not take blood from existing peripheral cannulae or from immediately above cannula sites. Do not use the femoral vein.
11. Rotate the blood culture bottles to mix; do not shake.
12. Do not change needles between vein and bottles (this risks contamination).
13. Apply a dressing to the site and apply white nail pressure for 2+ minutes.
14. Dispose of sharps carefully as per local policy.
15. Label the sample, noting the time and site (i.e. peripheral, central line, etc.) on both the bottle and the form, record it, and transport to the laboratory as soon as possible. Do not obscure the bar code on the bottle.
16. Record the date, time and site of the specimen collected in the patient's notes.
17. Ensure any spillages are cleaned up as per local policy.
18. Wash hands again.
Avoid using needles and syringes for this procedure as they risk needle stick injuries, over or under fill of bottles and accidental contamination.
Other steps:
• Training for all participating staff
• Competency assessment
References:
Department of Health (2007) Saving lives: Reducing infection, delivering clean and safe care London: DoH
Donnino, M., Goyal, N., Terlecki, T., Donnino, K., Miller, J., Otero, R. and Howell, M. (2007) Inadequate blood volume collected for culture: A survey of health care professionals Mayo Clinic Proceedings 82(9) 1069-1072
Madeo, M. and Barlow, G. (2008) Reducing blood-culture contamination rates by the use of a 2% chlorhexidine solution applicator in acute admission units Journal of Hospital Infection 69, 207-309
Madeo, M, Davies, D., Owen, L., Wadsworth, P., Johnson, G. and Martin, C. (2003) Reduction in the contamination rate of blood cultures collected by medical staff in the accident and emergency department Clinical effectiveness in Nursing 7, 30-32.
Madeo, M., Jackson, T. and Williams, C. (2009) Simple measures to reduce the rate of contamination of blood cultures in accident and emergency Emergency Medicine Journal 22, 810-811.
Mimoz, O., Karim, A., Mercat, A., Cosseron, M., Falissard, B., Parker, F., Richard, C., Samii, K. and Nordmann, P. (1999) Chlorhexidine compared with povidone-iodine as skin preparation before blood culture Annals of Internal Medicine 131(11), 834-837
Pratt et al (2007) epic 2: National Evidence Guidelines for preventing healthcare associated infections in NHS hospitals in England Journal of Hospital Infection 65 (1) S14
Weinstein, M.P., Lee, A., Mirrett, S. and Barth Reller, L. (2007) Infections in adults: How many blood cultures are needed? Journal of Clinical Microbiology 45, 3546-3548.
sterilization

Sterilization (or sterilisation, see spelling differences) refers to any process that effectively kills or eliminates transmissible agents (such as fungi, bacteria, viruses, spore forms, etc.) from a surface, equipment, article of food or medication, or biological culture medium. Sterilization does not, however, remove prions. Sterilization can be achieved through application of heat, chemicals, irradiation, high pressure or filtration.
culture media preparation


Media Preparation
Microorganisms need nutrients, a source of energy and certain environmental conditions in order to grow and reproduce. In the environment, microbes have adapted to the habitats most suitable for their needs; in the laboratory, however, these requirements must be met by a culture medium. This is basically an aqueous solution to which all the necessary nutrients have been added. Depending on the type and combination of nutrients, different categories of media can be made.

Wednesday, July 8, 2009

STAINING TECHNIQUES






Staining is an auxiliary technique used in microscopy to enhance contrast in the microscopic image.
In biochemistry it involves adding a class-specific (DNA, proteins, lipids, carbohydrates) dye to a substrate to qualify or quantify the presence of a specific compound. It is similar to fluorescent tagging.
Stains and dyes are frequently used in biology and medicine to highlight structures in biological tissues for viewing, often with the aid of different microscopes. Stains may be used to define and examine bulk tissues (highlighting, for example, muscle fibers or connective tissue), cell populations (classifying different blood cells, for instance), or organelles within individual cells.
Biological staining is also used to mark cells in flow cytometry, and to flag proteins or nucleic acids in gel electrophoresis.
Staining is not limited to biological materials; it can also be used to study the morphology of other materials, for example the lamellar structures of semicrystalline polymers or the domain structures of block copolymers.

ziehl neelsen stain




The Ziehl-Neelsen stain, also known as the acid-fast stain, was first described by two German doctors: Franz Ziehl (1859 to 1926), a bacteriologist, and Friedrich Neelsen (1854 to 1894), a pathologist. It is a special bacteriological stain used to identify acid-fast organisms, mainly Mycobacteria. Mycobacterium tuberculosis is the most important of this group, as it is responsible for the disease called tuberculosis (TB). The stain is helpful in diagnosing Mycobacterium tuberculosis since its lipid-rich cell wall makes it resistant to the Gram stain. It can also be used to stain a few other bacteria, such as Nocardia. The reagents used are Ziehl-Neelsen carbolfuchsin, acid alcohol and methylene blue.

Tuesday, July 7, 2009

CELL SYNCHRONY


Cell Synchronization is a process by which cells at different stages of the cell cycle in a culture are brought to the same phase.[1] "Cell synchrony" is required to study the progression of cells through the cell cycle. The types of synchronizations are broadly categorized into two groups: "Physical Fractionation" and "Chemical Blockade."

Cell separation by physical means
Physical fractionation or cell separation techniques based on the following characteristics are in use:
Cell density
Cell size
Affinity of antibodies for cell-surface epitopes.
Light scatter or fluorescent emission by labeled cells.
The two commonly used techniques are:

Centrifugal separation
The physical characteristics — cell size and sedimentation velocity — are operative in the technique of centrifugal elutriation. Centrifugal elutriator (from Beckman) is an advanced device for increasing the sedimentation rate so that the yield and resolution of cells is better. The cell separation is carried out in a specially designed centrifuge and rotor.

Fluorescence-activated cell sorting
Fluorescence-activated cell sorting (FACS) is a technique for sorting out cells based on differences that can be detected by light scatter (e.g. cell size) or fluorescence emission (by penetrated DNA, RNA, proteins, antigens). The procedure involves passing a single stream of cells through a laser beam so that the scattered light from the cells can be detected and recorded. There are two instruments in use based on this principle:
a) Flow cytometer
b) Fluorescence-activated cell sorter
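A toy sketch of the sorting idea follows; the scatter and fluorescence values and the gate thresholds are invented and do not correspond to any real instrument's units:

```python
# Toy sketch of the FACS idea: each detected "event" (cell) carries light-scatter
# and fluorescence measurements, and a simple gate decides which events to keep.
# All numbers are invented illustration values, not real cytometer units.

events = [
    {"forward_scatter": 520, "fluorescence": 30},
    {"forward_scatter": 780, "fluorescence": 950},
    {"forward_scatter": 200, "fluorescence": 15},
    {"forward_scatter": 810, "fluorescence": 40},
]

def gate(event, min_scatter=400, min_fluorescence=100):
    """Keep large (high forward scatter) and brightly labelled cells."""
    return (event["forward_scatter"] >= min_scatter
            and event["fluorescence"] >= min_fluorescence)

sorted_out = [e for e in events if gate(e)]
print(f"kept {len(sorted_out)} of {len(events)} events:", sorted_out)
```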

Cell separation by chemical blockade
The cells can be separated by blocking metabolic reactions.[2] Two types of metabolic blockades are in use:

Inhibition of DNA synthesis
During the S phase of the cell cycle, DNA synthesis can be inhibited by using inhibitors such as thymidine, aminopterin, hydroxyurea and cytosine arabinoside. The effects of these inhibitors are variable. The cell cycle is predominantly blocked in S phase, which results in viable cells.

Nutritional deprivation
Elimination of serum from the culture medium for about 24 hours results in the accumulation of cells at the G1 phase. The block can then be released by adding serum back, at which point the cells re-enter the cycle in synchrony.

Optical microscopy
Optical or light microscopy involves passing visible light transmitted through or reflected from the sample through a single or multiple lenses to allow a magnified view of the sample.[1] The resulting image can be detected directly by the eye, imaged on a photographic plate or captured digitally. The single lens with its attachments, or the system of lenses and imaging equipment, along with the appropriate lighting equipment, sample stage and support, makes up the basic light microscope. The most recent development is the digital microscope which uses a CCD camera to focus on the exhibit of interest. The image is shown on a computer screen since the camera is attached to it via a USB port, so eye-pieces are unnecessary.

Limitations
Limitations of standard optical microscopy (bright field microscopy) lie in three areas:
The technique can only image dark or strongly refracting objects effectively.
Diffraction limits resolution to approximately 0.2 micrometre (see: microscope).
Out of focus light from points outside the focal plane reduces image clarity.
Live cells in particular generally lack sufficient contrast to be studied successfully, since the internal structures of the cell are colourless and transparent. The most common way to increase contrast is to stain the different structures with selective dyes, but this involves killing and fixing the sample. Staining may also introduce artifacts, apparent structural details that are caused by the processing of the specimen and are thus not a legitimate feature of the specimen.
These limitations have all been overcome to some extent by specific microscopy techniques which can non-invasively increase the contrast of the image. In general, these techniques make use of differences in the refractive index of cell structures. It is comparable to looking through a glass window: you (bright field microscopy) don't see the glass but merely the dirt on the glass. There is, however, a difference, as glass is a denser material, and this creates a difference in the phase of the light passing through. The human eye is not sensitive to this difference in phase, but clever optical solutions have been devised to change this difference in phase into a difference in amplitude (light intensity).
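The ~0.2 micrometre figure quoted above follows from the Abbe diffraction limit, d = λ / (2·NA); a short worked example:

```python
# Sketch: the ~0.2 micrometre resolution figure follows from the Abbe
# diffraction limit, d = wavelength / (2 * numerical aperture).

def abbe_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    return wavelength_nm / (2.0 * numerical_aperture)

if __name__ == "__main__":
    # Green light and a high-NA oil-immersion objective:
    d = abbe_limit_nm(wavelength_nm=550, numerical_aperture=1.4)
    print(f"lateral resolution limit ~ {d:.0f} nm (~0.2 micrometre)")
```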

Techniques

Bright field
Bright field microscopy is the simplest of all the light microscopy techniques. Sample illumination is via transmitted white light, i.e. illuminated from below and observed from above. Limitations include low contrast of most biological samples and low apparent resolution due to the blur of out of focus material. The simplicity of the technique and the minimal sample preparation required are significant advantages.

Oblique illumination
The use of oblique (from the side) illumination gives the image a 3-dimensional appearance and can highlight otherwise invisible features. A more recent technique based on this method is Hoffmann's modulation contrast, a system found on inverted microscopes for use in cell culture. Oblique illumination suffers from the same limitations as bright field microscopy (low contrast of many biological samples; low apparent resolution due to out of focus objects), but may highlight otherwise invisible structures.

Dark field
Dark field microscopy is a technique for improving the contrast of unstained, transparent specimens.[2] Dark field illumination uses a carefully aligned light source to minimize the quantity of directly transmitted (unscattered) light entering the image plane, collecting only the light scattered by the sample. Darkfield can dramatically improve image contrast, especially of transparent objects, while requiring little equipment setup or sample preparation. However, the technique suffers from low light intensity in the final image of many biological samples, and continues to be affected by low apparent resolution.
Rheinberg illumination is a special variant of dark field illumination in which transparent, colored filters are inserted just before the condenser so that light rays at high aperture are differently colored than those at low aperture (i.e. the background to the specimen may be blue while the object appears self-luminous yellow). Other color combinations are possible but their effectiveness is quite variable.[3]

Dispersion staining
Dispersion staining is an optical technique that results in a colored image of a colorless object. This is an optical staining technique and requires no stains or dyes to produce a color effect. There are five different microscope configurations used in the broader technique of dispersion staining. They include brightfield Becke line, oblique, darkfield, phase contrast, and objective stop dispersion staining.

Phase contrast
More sophisticated techniques will show proportional differences in optical density. Phase contrast is a widely used technique that shows differences in refractive index as differences in contrast. It was developed by the Dutch physicist Frits Zernike in the 1930s (for which he was awarded the Nobel Prize in 1953). The nucleus in a cell, for example, will show up darkly against the surrounding cytoplasm. Contrast is excellent; however, it is not for use with thick objects. Frequently, a halo is formed even around small objects, which obscures detail. The system consists of a circular annulus in the condenser, which produces a cone of light. This cone is superimposed on a similar sized ring within the phase-objective. Every objective has a different size ring, so for every objective another condenser setting has to be chosen. The ring in the objective has special optical properties: it first of all reduces the direct light in intensity, but more importantly, it creates an artificial phase difference of about a quarter wavelength. As the physical properties of this direct light have changed, interference with the diffracted light occurs, resulting in the phase contrast image.

Differential interference contrast
Superior and much more expensive is the use of interference contrast. Differences in optical density will show up as differences in relief. A nucleus within a cell will actually show up as a globule in the most often used differential interference contrast system according to Georges Nomarski. However, it has to be kept in mind that this is an optical effect, and the relief does not necessarily resemble the true shape! Contrast is very good and the condenser aperture can be used fully open, thereby reducing the depth of field and maximizing resolution.
The system consists of a special prism (Nomarski prism, Wollaston prism) in the condenser that splits light in an ordinary and an extraordinary beam. The spatial difference between the two beams is minimal (less than the maximum resolution of the objective). After passage through the specimen, the beams are reunited by a similar prism in the objective.
In a homogeneous specimen, there is no difference between the two beams, and no contrast is being generated. However, near a refractive boundary (say a nucleus within the cytoplasm), the difference between the ordinary and the extraordinary beam will generate a relief in the image. Differential interference contrast requires a polarized light source to function; two polarizing filters have to be fitted in the light path, one below the condenser (the polarizer), and the other above the objective (the analyzer).
Note: In cases where the optical design of a microscope produces an appreciable lateral separation of the two beams we have the case of classical interference microscopy, which does not result in relief images, but can nevertheless be used for the quantitative determination of mass-thicknesses of microscopic objects.

Fluorescence
When certain compounds are illuminated with high energy light, they then emit light of a different, lower frequency. This effect is known as fluorescence. Often specimens show their own characteristic autofluorescence image, based on their chemical makeup.
This method is of critical importance in the modern life sciences, as it can be extremely sensitive, allowing the detection of single molecules. Many different fluorescent dyes can be used to stain different structures or chemical compounds. One particularly powerful method is the combination of antibodies coupled to a fluorochrome, as in immunostaining. Examples of commonly used fluorochromes are fluorescein or rhodamine. The antibodies can be tailored specifically for a chemical compound. For example, one strategy often in use is the artificial production of proteins, based on the genetic code (DNA). These proteins can then be used to immunize rabbits, which then form antibodies which bind to the protein. The antibodies are then coupled chemically to a fluorochrome and used to trace the proteins in the cells under study.
Highly efficient fluorescent proteins such as the green fluorescent protein (GFP) have been developed using the molecular biology technique of gene fusion, a process which links the expression of the fluorescent compound to that of the target protein (Piston DW, Patterson GH, Lippincott-Schwartz J, Claxton NS, Davidson MW (2007). "Nikon MicroscopyU: Introduction to Fluorescent Proteins". Nikon MicroscopyU. http://www.microscopyu.com/articles/livecellimaging/fpintro.html. Retrieved 2007-08-22). This combined fluorescent protein is generally non-toxic to the organism and rarely interferes with the function of the protein under study. Genetically modified cells or organisms directly express the fluorescently-tagged proteins, which enables the study of the function of the original protein in vivo.
Since fluorescence emission differs in wavelength (color) from the excitation light, a fluorescent image ideally only shows the structure of interest that was labeled with the fluorescent dye. This high specificity led to the widespread use of fluorescence light microscopy in biomedical research. Different fluorescent dyes can be used to stain different biological structures, which can then be detected simultaneously, while still being specific due to the individual color of the dye.
To block the excitation light from reaching the observer or the detector, filter sets of high quality are needed. These typically consist of an excitation filter selecting the range of excitation wavelengths, a dichroic mirror, and an emission filter blocking the excitation light. Most fluorescence microscopes are operated in the Epi-illumination mode (illumination and detection from one side of the sample) to further decrease the amount of excitation light entering the detector.
See also total internal reflection fluorescence microscope.

Confocal laser scanning
Confocal laser scanning microscopy (CLSM) generates the image in a completely different way from the normal visual bright field microscope. It gives slightly higher resolution, but most importantly it provides optical sectioning, excluding the out-of-focus light that would otherwise degrade the image. It therefore provides sharper images of 3D objects. This is often used in conjunction with fluorescence microscopy.

Deconvolution
Fluorescence microscopy is extremely powerful due to its ability to show specifically labeled structures within a complex environment and also because of its inherent ability to provide three dimensional information of biological structures. Unfortunately this information is blurred by the fact that upon illumination all fluorescently labeled structures emit light no matter if they are in focus or not. This means that an image of a certain structure is always blurred by the contribution of light from structures which are out of focus. This phenomenon becomes apparent as a loss of contrast especially when using objectives with a high resolving power, typically oil immersion objectives with a high numerical aperture.
Fortunately though, this phenomenon is not caused by random processes such as light scattering but can be relatively well defined by the optical properties of the image formation in the microscope imaging system. If one considers a small fluorescent light source (essentially a bright spot), light coming from this spot spreads out the further out of focus one is. Under ideal conditions this produces a sort of "hourglass" shape of this point source in the third (axial) dimension. This shape is called the point spread function (PSF) of the microscope imaging system. Since any fluorescence image is made up of a large number of such small fluorescent light sources the image is said to be "convolved by the point spread function".
Knowing this point spread function means that it is possible to reverse this process to a certain extent by computer-based methods commonly known as deconvolution microscopy.[4] There are various algorithms available for 2D or 3D deconvolution. They can be roughly classified into non-restorative and restorative methods. While the non-restorative methods can improve contrast by removing out-of-focus light from focal planes, only the restorative methods can actually reassign light to its proper place of origin. This can be an advantage over other types of 3D microscopy such as confocal microscopy, because light is not thrown away but reused. For 3D deconvolution, one typically provides a series of images derived from different focal planes (called a Z-stack) plus the knowledge of the PSF, which can be derived either experimentally or theoretically from knowing all contributing parameters of the microscope.
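As a toy illustration of "convolved by the point spread function" and of a restorative method, here is a 1D sketch using a Gaussian PSF and a few Richardson-Lucy iterations (the object, PSF width and iteration count are invented for illustration):

```python
import numpy as np

def gaussian_psf(size=21, sigma=2.0):
    """A normalised 1D Gaussian point spread function."""
    x = np.arange(size) - size // 2
    psf = np.exp(-x**2 / (2 * sigma**2))
    return psf / psf.sum()

def richardson_lucy(observed, psf, iterations=30):
    """A few Richardson-Lucy iterations (a classic restorative deconvolution)."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / (blurred + 1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

if __name__ == "__main__":
    true_object = np.zeros(100)
    true_object[[30, 36, 70]] = 1.0                        # three point-like emitters
    psf = gaussian_psf()
    observed = np.convolve(true_object, psf, mode="same")  # "convolved by the PSF"
    restored = richardson_lucy(observed, psf)
    print("observed at emitter positions:", np.round(observed[[30, 36, 70]], 3))
    print("restored at emitter positions:", np.round(restored[[30, 36, 70]], 3))
```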

Sub-diffraction techniques
It is well known that there is a spatial limit to which light can focus: approximately half of the wavelength of the light you are using. But this is not a true barrier, because this diffraction limit is only true in the far-field and localization precision can be increased with many photons and careful analysis (although two objects still cannot be resolved); and like the sound barrier, the diffraction barrier is breakable. This section explores some approaches to imaging objects smaller than ~250 nm. Most of the following information was gathered (with permission) from a chemistry blog's review of sub-diffraction microscopy techniques Part I and Part II. For a review, see also reference [5].

Near-field scanning
Near-field scanning is also called NSOM. Probably the most conceptual way to break the diffraction barrier is to use a light source and/or a detector that is itself nanometer-sized. Diffraction as we know it is truly a far-field effect: the light from an aperture is the Fourier transform of the aperture in the far-field.[6] But in the near-field, all of this is not necessarily the case. Near-field scanning optical microscopy (NSOM) forces light through the tiny tip of a pulled fiber, and the aperture can be on the order of tens of nanometers.[7] When the tip is brought to within nanometers of a molecule, the resolution is not limited by diffraction but by the size of the tip aperture (because only that one molecule will see the light coming out of the tip). An image can be built up by a raster scan of the tip over the surface.
The main down-side to NSOM is the limited number of photons you can force out a tiny tip, and the minuscule collection efficiency (if you are trying to collect fluorescence in the near-field). Other techniques such as ANSOM (see below) try to avoid this drawback.

Local enhancement / ANSOM / bowties
Instead of forcing photons down a tiny tip, some techniques create a local bright spot in an otherwise diffraction-limited spot. ANSOM is apertureless NSOM: it uses a tip very close to a fluorophore to enhance the local electric field the fluorophore sees.[8] Basically, the ANSOM tip is like a lightning rod which creates a hot spot of light.
Bowtie nanoantennas have been used to greatly and reproducibly enhance the electric field in the nanometer gap between the tips of two gold triangles. Again, the point is to enhance a very small region of a diffraction-limited spot, thus improving the mismatch between light and nanoscale objects and breaking the diffraction barrier.[9]

Stimulated emission depletion
Stefan Hell at the Max Planck Institute for Biophysical Chemistry in Goettingen (Germany) developed STED microscopy (stimulated emission depletion), which uses two laser pulses. The first pulse is a diffraction-limited spot that is tuned to the absorption wavelength, so it excites any fluorophores in that region; an immediate second pulse is red-shifted to the emission wavelength and stimulates emission back to the ground state before spontaneous fluorescence can occur, thus depleting the excited state of any fluorophores hit by this depletion pulse. The trick is that the depletion pulse goes through a phase modulator that makes the pulse illuminate the sample in the shape of a donut, so the outer part of the diffraction-limited spot is depleted and the small center can still fluoresce. By saturating the depletion pulse, the center of the donut gets smaller and smaller until resolution of tens of nanometers can be achieved.[10]
This technique also requires a raster scan like NSOM and standard confocal laser scanning microscopy.
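A commonly quoted scaling for the effective STED spot size is d ≈ wavelength / (2 × NA × sqrt(1 + I/I_sat)), where I is the depletion intensity and I_sat the saturation intensity. The numbers below are purely illustrative:

import numpy as np

wavelength_nm, na = 750.0, 1.4                 # depletion wavelength and objective NA
i_over_isat = np.array([0.0, 10.0, 100.0])     # depletion intensity / saturation intensity
d_nm = wavelength_nm / (2 * na * np.sqrt(1 + i_over_isat))
print(d_nm)   # ~268 nm, ~81 nm, ~27 nm: the dark centre of the donut shrinks with saturation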

Fitting the point-spread function
The methods above (and below) use experimental techniques to circumvent the diffraction barrier, but one can also use careful analysis to better determine where a nanoscale object is located. The image of a point source on a charge-coupled device (CCD) camera is called the point-spread function (PSF), and diffraction limits its width to no less than approximately half the wavelength of the light. But it is possible to simply fit that PSF with a Gaussian to locate its center, and thus the location of the fluorophore. The precision with which this technique can locate the center depends on the number of photons collected (as well as the CCD pixel size and other factors).[11] Groups such as the Selvin lab and many others have employed this analysis to localize single fluorophores to within a few nanometers. This, of course, requires careful measurements and the collection of many photons.
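A minimal sketch of this localization step, assuming a small camera cut-out containing a single emitter; the symmetric-Gaussian model, starting guesses and function names are illustrative:

import numpy as np
from scipy.optimize import curve_fit

def gaussian_2d(coords, x0, y0, sigma, amplitude, offset):
    x, y = coords
    return offset + amplitude * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2))

def localize(spot):
    # Fit a symmetric 2D Gaussian to the measured PSF and return its sub-pixel
    # centre; precision improves roughly as sigma / sqrt(number of photons).
    y, x = np.indices(spot.shape)
    p0 = (spot.shape[1] / 2, spot.shape[0] / 2, 1.5,
          float(spot.max() - spot.min()), float(spot.min()))
    popt, _ = curve_fit(gaussian_2d, (x.ravel(), y.ravel()), spot.ravel(), p0=p0)
    return popt[0], popt[1]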

PALM, STORM
What fitting a PSF is to localization, photo-activated localization microscopy (PALM) is to "resolution" (the term is used loosely here to mean measuring the distance between objects, not true optical resolution). Eric Betzig and colleagues developed PALM;[12] Xiaowei Zhuang at Harvard used a similar technique and calls it STORM, stochastic optical reconstruction microscopy;[13] Sam Hess at the University of Maine developed a similar technique simultaneously. The basic premise of these techniques is to fill the imaging area with many dark fluorophores that can be photoactivated into a fluorescing state by a flash of light. Because photoactivation is stochastic, only a few, well-separated molecules "turn on." Gaussians are then fit to their PSFs to high precision (see the section above). After the few bright dots photobleach, another flash of the photoactivating light activates random fluorophores again and the PSFs of these different, well-spaced objects are fit. This process is repeated many times, building up an image molecule by molecule; and because the molecules were localized at different times, the "resolution" of the final image can be much higher than that limited by diffraction.
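The logic of the acquisition cycle can be sketched in a few lines; the activation probability, localization precision and emitter positions below are invented purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
true_positions = rng.uniform(0, 1000, size=(5000, 2))   # hypothetical emitter positions (nm)

localizations = []
for frame in range(2000):
    # Stochastic photoactivation: only a sparse, well-separated subset turns on.
    active = true_positions[rng.random(len(true_positions)) < 0.001]
    # Each active emitter is localized by PSF fitting, mimicked here by ~20 nm noise.
    localizations.append(active + rng.normal(0, 20, size=active.shape))

points = np.vstack(localizations)   # pointillist image built up molecule by molecule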
The major problem with these techniques is that to get these beautiful pictures, it takes on the order of hours to collect the data. This is certainly not the technique to study dynamics (fitting the PSF is better for that).

Structured illumination

Comparison of the resolution obtained by confocal laser scanning microscopy (top) and 3D structured illumination microscopy (3D-SIM-Microscopy, bottom). Shown are details of a nuclear envelope. Nuclear pores (anti-NPC) red, nuclear envelope (anti-Lamin) green, chromatin (DAPI-staining) blue. Scale bars: 1µm.
There is also the wide-field structured-illumination (SI) approach to breaking the diffraction limit of light.[14][15] SI, or patterned illumination, relies on both specific microscopy protocols and extensive software analysis after exposure. Because SI is a wide-field technique, it is usually able to capture images at a higher rate than confocal-based schemes like STED (this is only a generalization: SI is not inherently fast, and in principle STED could be made fast and SI slow). The main concept of SI is to illuminate a sample with patterned light and increase the resolution by measuring the fringes in the Moiré pattern (from the interference of the illumination pattern and the sample). "Otherwise-unobservable sample information can be deduced from the fringes and computationally restored."[16]
SI enhances spatial resolution by collecting information from frequency space outside the observable region. This processing is done in reciprocal space: the Fourier transform (FT) of an SI image contains superimposed additional information from different areas of reciprocal space; with several frames taken with the illumination shifted by some phase, it is possible to computationally separate and reconstruct the FT image, which contains much more resolution information. The inverse FT then returns the reconstructed image to real space as a super-resolution image.
This, however, only enhances the resolution by a factor of 2 (because the SI pattern cannot be focused to anything smaller than half the wavelength of the excitation light). To increase the resolution further, nonlinearities can be introduced, which show up as higher-order harmonics in the FT. In reference [16], Gustafsson uses saturation of the fluorescent sample as the nonlinear effect: a sinusoidal, saturating excitation beam produces a distorted fluorescence intensity pattern in the emission, and this nonpolynomial nonlinearity yields a series of higher-order harmonics in the FT.
Each higher-order harmonic in the FT allows another set of images that can be used to reconstruct a larger area of reciprocal space, and thus a higher resolution. In this case, Gustafsson achieves a resolving power of better than 50 nm, more than five times better than that of the microscope in its normal configuration.
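The underlying frequency-mixing idea can be illustrated in one dimension: multiplying the sample by a sinusoidal illumination pattern produces copies of the sample spectrum shifted by the pattern frequency, and several phase-shifted frames allow those copies to be separated. The pattern frequency and phases below are illustrative only.

import numpy as np

n, k_pattern = 512, 0.1                             # pattern frequency in cycles/pixel
x = np.arange(n)
sample = np.random.default_rng(1).normal(size=n)    # stand-in for fine sample detail

frames = []
for phase in (0.0, 2 * np.pi / 3, 4 * np.pi / 3):   # three phase-shifted illuminations
    illumination = 1 + np.cos(2 * np.pi * k_pattern * x + phase)
    frames.append(sample * illumination)            # detected image before the optical low-pass

# Each frame's spectrum contains the sample spectrum plus copies shifted by +/- k_pattern;
# solving a small linear system per frequency separates the copies so they can be moved
# back to their true positions, enlarging the accessible region of reciprocal space.
spectra = np.fft.fft(np.array(frames), axis=1)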
The main problems with SI are that, in this incarnation, saturating excitation powers cause more photodamage and lower fluorophore photostability, and sample drift must be kept to below the resolving distance. The former limitation might be solved by using a different nonlinearity (such as stimulated emission depletion or reversible photoactivation, both of which are used in other sub-diffraction imaging schemes); the latter limits live-cell imaging and may require faster frame rates or the use of some fiduciary markers for drift subtraction. Nevertheless, SI is certainly a strong contender for further application in the field of super-resolution microscopy.

Localization Microscopy/Spatially Structured Illumination
Around 1995, Christoph Cremer began developing a light-microscopic process that achieved a substantially improved size resolution of cellular nanostructures stained with a fluorescent marker. He employed the principle of wide-field microscopy combined with structured laser illumination (spatially modulated illumination, SMI)[17]. Currently, a size resolution of 30–40 nm (approximately 1/16–1/13 of the wavelength used) is being achieved. In addition, this technology is not subject to the speed limitations of point-scanning microscopy, so it becomes possible to undertake 3D analyses of whole cells within short observation times (at the moment around a few seconds). Also since around 1995, Christoph Cremer has developed and realized new fluorescence-based wide-field microscopy approaches aimed at improving the effective optical resolution (in terms of the smallest detectable distance between two localized objects) to a fraction of the conventional resolution (spectral precision distance/position determination microscopy, SPDM). The combination of SPDM and SMI, known as Vertico-SMI microscopy,[18] currently achieves a resolution of approximately 10 nm in 2D and 40 nm in 3D in wide-field images of whole living cells.[19] Wide-field 3D "nanoimages" of whole living cells currently still take about two minutes, but work to reduce this further is under way. Vertico-SMI is currently the fastest optical 3D nanoscope for the three-dimensional structural analysis of whole cells worldwide.
Image gallery (3D-SIM microscopy): cell nuclei and mitotic stages, including a comparison of confocal microscopy with 3D-SIM, a cell nucleus in prophase viewed from various angles, two mouse cell nuclei in prophase, and a mouse cell in telophase.

Extensions
Most modern instruments provide simple solutions for micro-photography and electronic image recording. However, such capabilities are not always present, and the more experienced microscopist will, in many cases, still prefer a hand-drawn image to a photograph. This is because a microscopist with knowledge of the subject can accurately convert a three-dimensional image into a precise two-dimensional drawing. In a photograph or other image-capture system, however, only one thin plane is ever in good focus.
The creation of careful and accurate micrographs requires a microscopical technique using a monocular eyepiece. It is essential that both eyes are open and that the eye that is not observing down the microscope is instead concentrated on a sheet of paper on the bench beside the microscope. With practice, and without moving the head or eyes, it is possible to accurately record the observed details by tracing around the observed shapes while simultaneously "seeing" the pencil point in the microscopical image.
Practicing this technique also establishes good general microscopical technique. It is always less tiring to observe with the microscope focused so that the image is seen at infinity and with both eyes open at all times.

Other enhancements
Main article: stereomicroscope

X-ray
Main article: X-ray microscopy
Since resolution depends on the wavelength of the radiation used, microscopes built around shorter wavelengths can resolve finer detail. Electron microscopes, developed since the 1930s, use electron beams instead of light; because of the much smaller wavelength of the electron beam, their resolution is far higher.
Though less common, X-ray microscopy has also been developed, since the late 1940s. The resolution of X-ray microscopy lies between that of light microscopy and electron microscopy.

Electron microscopy
Main article: Electron microscope
For light microscopy, the wavelength of the light limits the resolution to around 0.2 micrometers. To obtain higher resolution, electron microscopes use an electron beam, which has a far smaller wavelength.
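The gain is easy to quantify with the de Broglie relation, wavelength = h/p. The non-relativistic estimate below uses standard physical constants (relativistic corrections shrink the value somewhat further at high voltages):

import math

h, m_e, q_e = 6.626e-34, 9.109e-31, 1.602e-19   # Planck constant, electron mass, electron charge (SI)

def electron_wavelength_nm(volts):
    # Non-relativistic de Broglie wavelength of an electron accelerated through `volts`.
    momentum = math.sqrt(2 * m_e * q_e * volts)
    return h / momentum * 1e9

print(electron_wavelength_nm(100e3))   # ~0.004 nm at 100 kV, versus ~400-700 nm for visible light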
Transmission electron microscopy (TEM) is in principle quite similar to the compound light microscope: an electron beam is sent through a very thin slice of the specimen. The resolution limit in 2005 was around 0.05 nanometers and has not increased appreciably since that time.
Scanning electron microscopy (SEM) visualizes details on the surfaces of cells and particles and gives a very good three-dimensional view. It gives results much like those of the stereo light microscope and, similarly, its most useful magnification range is lower than that of the transmission electron microscope.

Atomic de Broglie
Main article: Atomic de Broglie microscope
The atomic de Broglie microscope is an imaging system expected to provide resolution at the nanometer scale using neutral helium atoms as probe particles.[20][21] Such a device could provide nanometer-scale resolution and be completely non-destructive, but it is not as well developed as the optical microscope or the electron microscope.

Scanning probe microscopy
Main article: Scanning probe microscopy
This is a sub-diffraction technique. Examples of scanning probe microscopes are the atomic force microscope (AFM), the scanning tunneling microscope, and the photonic force microscope. All such methods use a solid probe tip in the vicinity (near field) of an object, which should be almost flat.

Ultrasonic force
Ultrasonic force microscopy (UFM) has been developed to improve the detail and image contrast on "flat" areas of interest where AFM images are limited in contrast. The combination of AFM and UFM allows a near-field acoustic microscopic image to be generated. The AFM tip is used to detect the ultrasonic waves, overcoming the wavelength limitation that occurs in acoustic microscopy. By using the elastic changes under the AFM tip, an image of much greater detail than the AFM topography can be generated.
Ultrasonic force microscopy allows the local mapping of elasticity in atomic force microscopy by the application of ultrasonic vibration to the cantilever or sample. In an attempt to analyse the results of ultrasonic force microscopy in a quantitative fashion, a force-distance curve measurement is done with ultrasonic vibration applied to the cantilever base, and the results are compared with a model of the cantilever dynamics and tip-sample interaction based on the finite-difference technique.
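As a rough illustration of such a finite-difference model, the sketch below integrates a single-mode cantilever treated as a driven, damped oscillator whose effective stiffness is increased by tip-sample contact. Every parameter value is invented for illustration; a real quantitative analysis would use a full multi-mode beam model and a measured contact-force law.

import numpy as np

f0, q_factor = 75e3, 300.0        # free resonance (Hz) and quality factor (illustrative)
k_lever, k_contact = 2.0, 50.0    # cantilever and tip-sample contact stiffness (N/m, illustrative)
m = k_lever / (2 * np.pi * f0)**2
c = 2 * np.pi * f0 * m / q_factor

dt, n_steps, f_drive = 1e-9, 200_000, 4e6   # time step (s), number of steps, ultrasonic drive (Hz)
z = np.zeros(n_steps)                        # tip deflection history
v = 0.0
for i in range(1, n_steps):
    drive = 1e-10 * k_lever * np.cos(2 * np.pi * f_drive * i * dt)
    force = -(k_lever + k_contact) * z[i - 1] - c * v + drive
    v += force / m * dt            # semi-implicit (Euler-Cromer) finite-difference update
    z[i] = z[i - 1] + v * dt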

Infrared microscopy
The term infrared microscope covers two main types of diffraction-limited microscopy. The first provides optical visualization plus IR spectroscopic data collection. The second (more recent and more advanced) technique employs focal plane array detection for infrared chemical imaging, where the image contrast is determined by the response of individual sample regions to particular IR wavelengths selected by the user.
Infrared versions of sub-diffraction microscopy (see above) also exist; these include IR NSOM[22] and photothermal microspectroscopy.

Digital holographic microscopy
In digital holographic microscopy (DHM), interfering wave-fronts from a coherent light-source are recorded on a sensor and the image digitally reconstructed by a computer. The image yielded provides a quantitative measurement of the optical thickness of the specimen. DHM can be used with many different optical set-ups. In reflecting DHM, the sensor is positioned on the same side of the specimen as the light source. In transmitting DHM, the sensor and the light source are positioned on opposite sides of the specimen.
One unique feature of DHM is the ability to adjust focus after the image is recorded, since all focus planes are recorded simultaneously by the hologram.
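Numerical refocusing is typically done by propagating the reconstructed complex field, for example with the angular spectrum method; a minimal sketch (function and parameter names are illustrative):

import numpy as np

def refocus(field, wavelength, pixel_size, dz):
    # Propagate a reconstructed complex wavefront by a distance dz, which shifts
    # the plane of best focus after the hologram has been recorded.
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    fxx, fyy = np.meshgrid(fx, fy)
    arg = np.maximum(1.0 / wavelength**2 - fxx**2 - fyy**2, 0.0)
    transfer = np.exp(2j * np.pi * np.sqrt(arg) * dz)   # evanescent components suppressed
    return np.fft.ifft2(np.fft.fft2(field) * transfer)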

Digital Pathology (virtual microscopy)
Main article: Digital Pathology
Digital Pathology is an image-based information environment enabled by computer technology that allows for the management of information generated from a digital slide. Digital pathology is enabled in part by virtual microscopy, which is the practice of converting glass slides into digital slides that can be viewed, managed, and analyzed.

Amateur microscopy
Amateur microscopy is the investigation and observation of biological and non-biological specimens for recreational purposes. Collectors of minerals, insects, seashells and plants may use microscopes as tools to uncover features that help them classify their collected items. Other amateurs may be interested in observing the life found in pond water and other samples. Microscopes may also prove useful for water-quality assessment by people who keep a home aquarium. Photographic documentation and drawing of microscopic images are additional activities that broaden the scope of the hobby. There are even competitions for photomicrograph art. Participants in this pastime may either use commercially prepared microscope slides or prepare their own specimens.
While microscopy is a central tool in the documentation of biological specimens, it is generally insufficient to justify the description of a new species on the basis of microscopic investigations alone. Often genetic and biochemical tests are necessary to confirm the discovery of a new species. A laboratory and access to academic literature are necessities, and both are specialized and generally not available to amateurs. There is, however, one huge advantage that amateurs have over professionals: time to explore their surroundings. Often, advanced amateurs team up with professionals to validate their findings and (possibly) describe new species.
In the late 1800s, amateur microscopy became a popular hobby in the United States and Europe. Several 'professional amateurs' were paid for their sampling trips and microscopic explorations by philanthropists, to keep them amused on Sunday afternoons (for example, the diatom specialist A. Grunow was paid by, among others, a Belgian industrialist). Professor John Phin published "Practical Hints on the Selection and Use of the Microscope" (Second Edition, 1878), and was also the editor of the "American Journal of Microscopy."
In 1995, a loose group of amateur microscopists, drawn from several organizations in the UK and USA, founded a site for microscopy based on the knowledge and input of amateur (perhaps better referred to as 'enthusiast') microscopists. This was historically the first attempt to establish 'amateur' microscopy as a serious subject in the then-emerging new medium of the Internet. Today, it remains an established international resource for all ages, to which they can contribute their findings and share information. It is a non-profit web presence dedicated to the pursuit of science and understanding of the small-scale world: [1]