Monday, December 22, 2008

Sabayon :: Shorewall configuration


Sunday, 21. December 2008, 14:33:11


How to install and configure shorewall in Ubuntu / Debian
What is Shorewall?

The Shoreline Firewall, more commonly known as "Shorewall", is a high-level
tool for configuring Netfilter. You describe your firewall/gateway
requirements using entries in a set of configuration files. Shorewall
reads those configuration files and, with the help of the iptables
utility, configures Netfilter to match your requirements.
Shorewall can be used on a dedicated firewall system, a multi-function
gateway/router/server or on a standalone GNU/Linux system. Shorewall
does not use Netfilter's ipchains compatibility mode and can thus take
advantage of Netfilter's connection state tracking capabilities.

Install Shorewall

# sudo apt-get install shorewall
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
openbsd-inetd
Use 'apt-get autoremove' to remove them.
Suggested packages:
shorewall-doc
The following NEW packages will be installed:
shorewall
0 upgraded, 1 newly installed, 0 to remove and 26 not upgraded.
Need to get 0B/250kB of archives.
After unpacking 1241kB of additional disk space will be used.
Preconfiguring packages ...
Selecting previously deselected package shorewall.
(Reading database ... 16390 files and directories currently installed.)
Unpacking shorewall (from .../shorewall_3.2.6-2_all.deb) ...
Setting up shorewall (3.2.6-2) ...
#### WARNING ####
The firewall won't be started/stopped unless it is configured
Please read about Debian specific customization in
/usr/share/doc/shorewall/README.Debian.gz.
#################
Configure Shorewall Startup Service

# pico /etc/default/shorewall

#Now simply change the line below from 0 to 1

startup = 0
to
startup = 1

#save, and exit.
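The same edit can also be done non-interactively with sed. This is just a sketch, demonstrated here against a scratch copy; point the path at /etc/default/shorewall (with sudo) to apply it for real.

```shell
#!/bin/sh
# Sketch: flip "startup = 0" to "startup = 1" with sed instead of pico.
# Shown on a scratch copy; use /etc/default/shorewall (via sudo) for real.
f=$(mktemp)
echo 'startup = 0' > "$f"
sed -i 's/^startup *= *0/startup = 1/' "$f"
grep '^startup' "$f"    # prints: startup = 1
```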

Shorewall configuration files are stored in two separate places
/etc/shorewall stores all the program configuration files.
/usr/share/shorewall stores supporting files and action files.

Configuring Shorewall

First you must decide which example configuration fits your network plan.
There are three example configurations: one interface, two
interfaces, and three interfaces. For more detail you may refer to http://www.shorewall.net/two-interface.htm.
Alternatively, we can use the default sample configuration. We need to copy all the sample
configuration files from /usr/share/doc/shorewall/default-config to
/etc/shorewall:

# cp /usr/share/doc/shorewall/default-config/* /etc/shorewall/


Now you have configuration files located at /etc/shorewall

Zones Configuration

First edit the zones file to specify the different network zones. These are
just labels that you will use in the other files. Consider the Internet
as one zone and a private network as another. If you have this layout,
then the zones file would look like this:

# pico /etc/shorewall/zones

# add 2 lines below into your zones file

net ipv4
loc ipv4

#save and exit

Interfaces Configuration

The
next file to edit is the interfaces file to specify the interfaces on
your machine. Here you will connect the zones that you defined in the
previous step with an actual interface. The third field is the
broadcast address for the network attached to the interface ("detect"
will figure this out for you). Finally the last fields are options for
the interface. The options listed below are a good starting point:

# pico /etc/shorewall/interfaces

# add 2 lines below into interfaces file

net wlan0 detect dhcp,tcpflags,routefilter,nosmurfs,logmartians
loc eth1 detect tcpflags,nosmurfs

#save and exit


Policy Configuration

The
next file defines your firewall default policy. The default policy is
used if no other rules apply. Often you will set the default policy to
REJECT or DROP as the default, and then configure specifically what
ports/services are allowed in the next step, and any that you do not
configure are by default rejected or dropped according to this policy.
An example policy (based on the zones and interfaces we used above)
would be:

# pico /etc/shorewall/policy

# sample from my shorewall policy configuration
loc net DROP info
loc $FW DROP info
loc all DROP info
$FW net ACCEPT info
$FW loc DROP info
$FW all DROP info
net $FW DROP info
net loc DROP info
net all DROP info
all all DROP info

# save and exit

This policy says: by default, allow traffic originating from the firewall
machine ($FW) out to the internet, and drop everything else: traffic from the
local network (to the firewall, the internet, or anywhere), anything coming in
from the internet, and traffic from the firewall to the local network. Every
policy match is logged at syslog level "info". The last all-to-all line
closes everything else off, and probably won't ever be touched. Note:
DROP rules drop packets quietly, while REJECT sends something back letting
the originator know they've been rejected.

Rules Configuration

The most important file is the rules file. This is where you set what is allowed
and what is not. Any new connection that comes into your firewall passes over
these rules; if none of them apply, then the default policy
applies. Note: this is only for new connections; packets belonging to
existing connections are automatically accepted. The comments in the file
give you a good idea of how things work, but the following example can
give you a head start:

# pico /etc/shorewall/rules

# add few lines below into rules file
DNS/ACCEPT $FW net
SSH/ACCEPT loc $FW
Ping/ACCEPT loc $FW
Ping/ACCEPT loc net
Ping/ACCEPT net $FW
ACCEPT $FW loc icmp
ACCEPT $FW net icmp

#WEB SERVICE PORT
ACCEPT loc net tcp 80
ACCEPT loc net tcp 443
ACCEPT loc $FW tcp 10000

# save and exit

In long-hand, these rules say: allow DNS queries from the firewall to the
internet; accept SSH (22) and ping from the local network to the firewall;
allow pings from the local network to the internet and from the internet to
the firewall; let the firewall send icmp to both zones; and allow the local
network out to the web on tcp 80 (www) and 443 (https), plus tcp 10000
(webmin) on the firewall itself.
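If you also host a service on a LAN machine, the usual next addition is a DNAT rule to forward traffic from the internet to that host. The address below is purely illustrative; assume a web server at 192.168.1.10 in the loc zone:

```
#ACTION  SOURCE  DEST               PROTO  DEST PORT(S)
DNAT     net     loc:192.168.1.10   tcp    80
```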

The final step is to start the Shorewall firewall

# sudo /etc/init.d/shorewall start
password :

If there is a syntax error in your configuration you will get an error
saying so, and you should have a read of /var/log/shorewall-init.log to
figure out why.

If everything does start up, you should make
sure that you aren't blocking something that you don't mean to; you can
check by looking at your firewall logs.
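Before restarting after future edits, it also helps to let Shorewall validate the files first. A sketch of a typical session (shorewall check ships with the package; the exact log file can vary by distribution):

```
# sudo shorewall check
# sudo /etc/init.d/shorewall restart
# tail -f /var/log/syslog | grep Shorewall
```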

Powered by ScribeFire.


Friday, November 28, 2008

Sending email via php

<?php

$to = "email@address.com";

$subject = "Our Subject";

$message = "The message to send";

$header = "From: you@email.com";
mail($to, $subject, $message, $header);

?>


Saturday, November 08, 2008

Remembrance :: Vinnie Moore + The Journey

By accident, I found my Vinnie Moore Mind's Eye CD. A lot of images came to my
mind. The first thing I did was play The Journey, which is my favorite.
This song has everything: brilliant solos, "feeling", pure artistry and execution
from Moore. He is still my favorite guitar player...hands down!

You are THE MAN!, Vinnie.

-2501

http://www.youtube.com/watch?v=as9AwipvGC4


Tuesday, September 30, 2008

Scientists Say They’ve Found a Code Beyond Genetics in DNA


Researchers believe they have found a second code in DNA in addition to the genetic code.

The genetic code specifies all the proteins that a cell makes. The
second code, superimposed on the first, sets the placement of the
nucleosomes, miniature protein spools around which the DNA is looped.
The spools both protect and control access to the DNA itself.

The
discovery, if confirmed, could open new insights into the higher order
control of the genes, like the critical but still mysterious process by
which each type of human cell is allowed to activate the genes it needs
but cannot access the genes used by other types of cell.

The new
code is described in the current issue of Nature by Eran Segal of the
Weizmann Institute in Israel and Jonathan Widom of Northwestern University in Illinois and their colleagues.

There
are about 30 million nucleosomes in each human cell. So many are needed
because the DNA strand wraps around each one only 1.65 times, in a
twist containing 147 of its units, and the DNA molecule in a single
chromosome can be up to 225 million units in length.

Biologists
have suspected for years that some positions on the DNA, notably those
where it bends most easily, might be more favorable for nucleosomes
than others, but no overall pattern was apparent. Drs. Segal and Widom
analyzed the sequence at some 200 sites in the yeast genome where
nucleosomes are known to bind, and discovered that there is indeed a
hidden pattern.

Knowing the pattern, they were able to predict the placement of about 50 percent of the nucleosomes in other organisms.

The
pattern is a combination of sequences that makes it easier for the DNA
to bend itself and wrap tightly around a nucleosome. But the pattern
requires only some of the sequences to be present in each nucleosome
binding site, so it is not obvious. The looseness of its requirements
is presumably the reason it does not conflict with the genetic code,
which also has a little bit of redundancy or wiggle room built into it.

Having
the sequence of units in DNA determine the placement of nucleosomes
would explain a puzzling feature of transcription factors, the proteins
that activate genes. The transcription factors recognize short
sequences of DNA, about six to eight units in length, which lie just in
front of the gene to be transcribed.

But these short sequences
occur so often in the DNA that the transcription factors, it seemed,
must often bind to the wrong ones. Dr. Segal, a computational
biologist, believes that the wrong sites are in fact inaccessible
because they lie in the part of the DNA wrapped around a nucleosome.
The transcription factors can only see sites in the naked DNA that lies
between two nucleosomes.

The nucleosomes frequently move around,
letting the DNA float free when a gene has to be transcribed. Given
this constant flux, Dr. Segal said he was surprised they could predict
as many as half of the preferred nucleosome positions. But having
broken the code, “We think that for the first time we have a real
quantitative handle” on exploring how the nucleosomes and other
proteins interact to control the DNA, he said.

The other 50
percent of the positions may be determined by competition between the
nucleosomes and other proteins, Dr. Segal suggested.

Several
experts said the new result was plausible because it generalized the
longstanding idea that DNA is more bendable at certain sequences, which
should therefore favor nucleosome positioning.

“I think it’s really interesting,” said Bradley Bernstein, a biologist at Massachusetts General Hospital.


Jerry Workman of the Stowers Institute in Kansas City said the
detection of the nucleosome code was “a profound insight if true,”
because it would explain many aspects of how the DNA is controlled.

The
nucleosome is made up of proteins known as histones, which are among
the most highly conserved in evolution, meaning that they change very
little from one species to another. A histone of peas and cows differs
in just 2 of its 102 amino acid units. The conservation is usually
attributed to the precise fit required between the histones and the DNA
wound around them. But another reason, Dr. Segal suggested, could be
that any change would interfere with the nucleosomes’ ability to find
their assigned positions on the DNA.

In the genetic code, sets of
three DNA units specify various kinds of amino acid, the units of
proteins. A curious feature of the code is that it is redundant,
meaning that a given amino acid can be defined by any of several
different triplets. Biologists have long speculated that the redundancy
may have been designed so as to coexist with some other kind of code,
and this, Dr. Segal said, could be the nucleosome code.





Computer Made from DNA and Enzymes

from: http://news.nationalgeographic.com/news/2003/02/0224_030224_DNAcomputer.html

Stefan Lovgren
for National Geographic News

February 24, 2003




Israeli scientists have devised a computer that can perform 330
trillion operations per second, more than 100,000 times the speed of
the fastest PC. The secret: It runs on DNA.

A year ago, researchers from the Weizmann Institute of Science
in Rehovot, Israel, unveiled a programmable molecular computing machine
composed of enzymes and DNA molecules instead of silicon microchips.
Now the team has gone one step further. In the new device, the single
DNA molecule that provides the computer with the input data also
provides all the necessary fuel.


The design is considered a giant step in DNA computing. The Guinness
World Records last week recognized the computer as "the smallest
biological computing device" ever constructed. DNA computing is in its
infancy, and its implications are only beginning to be explored. But it
could transform the future of computers, especially in pharmaceutical
and biomedical applications.




Following Mother Nature's Lead

Biochemical "nanocomputers" already exist in nature; they are
manifest in all living things. But they're largely uncontrollable by
humans. We cannot, for example, program a tree to calculate the digits
of pi. The idea of using DNA to store and process information took off
in 1994 when a California scientist first used DNA in a test tube to
solve a simple mathematical problem.


Since then, several research groups have proposed designs for DNA
computers, but those attempts have relied on an energetic molecule
called ATP for fuel. "This re-designed device uses its DNA input as its
source of fuel," said Ehud Shapiro, who led the Israeli research team.


Think of DNA as software, and enzymes as hardware. Put them together in
a test tube. The way in which these molecules undergo chemical
reactions with each other allows simple operations to be performed as a
byproduct of the reactions. The scientists tell the devices what to do
by controlling the composition of the DNA software molecules. It's a
completely different approach to pushing electrons around a dry circuit
in a conventional computer.

To the naked eye, the DNA computer looks like clear water
solution in a test tube. There is no mechanical device. A trillion
bio-molecular devices could fit into a single drop of water. Instead of
showing up on a computer screen, results are analyzed using a technique
that allows scientists to see the length of the DNA output molecule.

"Once the input, software, and hardware molecules are mixed in
a solution it operates to completion without intervention," said David
Hawksett, the science judge at Guinness World Records. "If you want to
present the output to the naked eye, human manipulation is needed."



Don't Run to the PC Store Just Yet

As of now, the DNA computer can only perform rudimentary
functions, and it has no practical applications. "Our computer is
programmable, but it's not universal," said Shapiro. "There are
computing tasks it inherently can't do."


The device can check whether a list of zeros and ones has an even
number of ones. The computer cannot count how many ones are in a list,
since it has a finite memory and the number of ones might exceed its
memory size. Also, it can only answer yes or no to a question. It
can't, for example, correct a misspelled word.

In terms of speed and size, however, DNA computers
surpass conventional computers. While scientists say silicon chips
cannot be scaled down much further, the DNA molecule found in the
nucleus of all cells can hold more information in a cubic centimeter
than a trillion music CDs. A spoonful of Shapiro's "computer soup"
contains 15,000 trillion computers. And its energy-efficiency is more
than a million times that of a PC.
While a desktop PC is designed to perform one calculation very fast,
DNA strands produce billions of potential answers simultaneously. This
makes the DNA computer suitable for solving "fuzzy logic" problems that
have many possible solutions rather than the either/or logic of binary
computers. In the future, some speculate, there may be hybrid machines
that use traditional silicon for normal processing tasks but have DNA
co-processors that can take over specific tasks they would be more
suitable for.



Doctors in a Cell


Perhaps most importantly, DNA computing devices could revolutionize the
pharmaceutical and biomedical fields. Some scientists predict a future
where our bodies are patrolled by tiny DNA computers that monitor our
well-being and release the right drugs to repair damaged or unhealthy
tissue.


"Autonomous bio-molecular computers may be able to work as 'doctors in
a cell,' operating inside living cells and sensing anomalies in the
host," said Shapiro. "Consulting their programmed medical knowledge,
the computers could respond to anomalies by synthesizing and releasing
drugs."

DNA computing research is going so fast that its potential is
still emerging. "This is an area of research that leaves the science
fiction writers struggling to keep up," said Hawksett from the Guinness
World Records.

A summary of the research conducted by scientists at the
Weizmann Institute of Science is published in today's online edition
of the Proceedings of the National Academy of Sciences.





DNA Molecules Display Telepathy-like Quality

http://www.livescience.com/health/080124-dna-telepathy.html

DNA molecules can display what almost seems like telepathy, research now reveals.

Double helixes of DNA can recognize matching molecules from a distance
and then gather together, all seemingly without help from any other molecules,
scientists find. Previously, under the classic understanding of DNA, scientists had
no reason to suspect that double helixes of the
molecule could sort themselves by type, let alone seek each other out.

The spiraling structure of DNA includes strings of molecules called bases.

Each of its four bases, commonly known by the letters A, T, C and G, is chemically
attracted to a specific partner — A likes binding to T, and C to G. The scheme binds
paired strands of DNA into the double helix the molecule is famous for.

Scientists investigated double-stranded DNA tagged with fluorescent
compounds. These molecules were placed in saltwater that contained no
proteins or other material that could interfere with the experiment or
help the DNA molecules communicate.

Curiously, DNA molecules with identical sequences of bases were roughly twice
as likely to gather together as DNA molecules with different sequences.

The known interactions that draw the bases together are not the factor
bringing these double helixes close. Double helixes of DNA keep their
bases on their insides. On their outsides, they have highly
electrically charged chains of sugars and phosphates, which obscure the
forces that pull bases together.

Although it looks as if spooky action or telepathic recognition is
going on, DNA operates under the laws of physics, not the supernatural.

To understand what researchers conjecture is really happening, think of
double helixes of DNA as corkscrews. The bases that make up a strand of
DNA each cause the corkscrew to bend one way or the other.
Double-stranded DNA with identical sequences each result in corkscrews
"whose ridges and grooves match up," said researcher Sergey Leikin, a
physical biochemist at the National Institute of Child Health and Human
Development in Bethesda, Md.

The electrically charged chains of sugars and phosphates of double
helixes of DNA cause the molecules to repel each other. However,
identical DNA double helixes have matching curves, meaning they repel
each other the least, Leikin explained. The scientists conjecture such
"telepathy" might help DNA molecules line up properly before they get
shuffled around. This could help avoid errors in how DNA combines,
errors that underpin cancer, aging and other health problems.

Also, the proper shuffling of DNA is essential to sexual reproduction,
as it helps ensure genetic diversity among offspring, Leikin added.


First artificial DNA a step towards biological computers

from: http://arstechnica.com/news.ars/post/20080708-first-artificial-dna-a-step-towards-biological-computers.html

It has been over 50 years since the discovery of the double-stranded
nature of DNA, and over that half-century and more we have learned a
lot about deoxyribonucleic acid, from the fact that it organizes into a
double-stranded double helix all the way to having sequenced the entire
DNA of humans and a range of other organisms. Now, according to a paper
published in the Journal of the American Chemical Society, a team in Japan has created the world's first DNA strand made from artificial bases.


As information storage systems go, DNA is not bad. Just four different
bases (adenine, thymine, guanine, and cytosine) are all that's needed
to code for 20 different amino acids, using three base codons (e.g.
AUG). In fact, the four-base, triplet codon system has the potential to
be able to store information for more than just 20 amino acids; there
are 64 potential combinations, so several amino acids have multiple
codons, along with three stop codons that tell the cellular machinery
involved that the sequence is done.


Along the way, people have looked at DNA and thought that it ought
to be possible to use DNA to store nonbiological data. Better still, it
can pack that information into far smaller packages than is possible
with solid state memory or even the densest hard drive platters. There
have also been experiments that use DNA sequences to perform parallel
processing, as we covered last year.




But we needn't be limited to the four complementary bases, and that's
just what has been shown by a Japanese team, who have published details
of their creation of an artificial DNA strand. All the components of
their DNA product are nonnatural, yet they spontaneously form
right-handed duplexes with the corresponding opposite base, and these
bonds have very similar properties to those of natural DNA.


The hope is that this artificial DNA could have a range of
applications in the real world, from the aforementioned DNA computing
proposals, along with using DNA to store data, to using it in nanotech
settings. Artificial DNA has similar physical properties to
common-or-garden DNA without being degraded by enzymes such as DNase
(which is found everywhere), a factor that would make it quite useful
for any kind of biomedical setting.









Monday, September 29, 2008

2501 DNA-chip ready

2501 DNA-chip is ready for testing.

-Alpha stage is scheduled for 1TZx3.
-Fortran compiler has been optimized to analyze algorithms.
-Puppet Master to monitor activity from remote location.



ATP and Whole-Body Vitrification System -- Alpha Tests

We held our alpha test of the new ATP and the whole-body vitrification system this month using a swine as the test subject. Given that this was our first large animal operation in many years,
we had something of a learning curve with regard to animal handling and
the specific surgical procedures necessary for performing bypass. We
chose to cannulate the carotid artery and internal jugular vein for the
procedure. I performed the cannulation and Regina Pancake assisted, and
the surgery went quite smoothly. We had the animal on bypass in 45
minutes, which our observing veterinarian considered quite successful.
We began our equipment testing with the new transport perfusion system.



We needed a mere five minutes to prepare and prime the system prior
to cannulation, but this figure was artificially high because the two
people preparing the system had to refresh their memories about how to
hang the perfusate bag. A time of less than two minutes to prepare the
system is the benchmark for our next test. All of the new elements
worked well, and we had no problems at all with the new ATP. We did not
test it fully on a closed circuit, only for open flush of the swine, in
order to start testing the whole body system.



Our whole body system consists of two parts that we tested: the
patient enclosure and the computer-controlled perfusion. The patient
enclosure involves an operating stage that cools the patient using
liquid nitrogen injected into a plenum underneath the patient, fans to
circulate nitrogen around the patient, a transparent – but internally
lighted – cover for the patient, and enough seals to keep the nitrogen
– both vapor and liquid – precisely where they should be.



The cooling stage cooled quite rapidly to the set temperature. We
added controllers for that only recently, because we were still
modifying the enclosure based on previous test results. The temperature
controllers need to be adjusted slightly by modifying how the cooling
curve is handled, but it took less than ten minutes to cool the stage
to three degrees C. We were quite pleased with the even nature of the
temperature, and Randal Fry is to be commended for his efforts to
adjust the nitrogen spray to accomplish this result. The table itself
is also at a more comfortable height for performing surgeries.



The perfusion system itself was the biggest unknown. Of course, the
programmer knew precisely how the system would respond to our tests,
because it was doing everything he told it to do. The calibrations of
the system went well, as did the system initialization. Our
cryoprotectant ramp control handled itself very well. Pressure control
did not go well, and this was because we had been using the pressure
control in a way that worked with an unloaded system (there was no body
in the loop). This made a big difference, and we will be adjusting that
portion of the program accordingly.



Our alarm functionality worked quite well. A clamp on a line that
caused the pressure to spike resulted in an immediate shutoff of the main
pump. Level indicators worked well, and all the pumps in the system
responded appropriately. Both manual and automatic control of all
parameters functioned as intended. We have some minor tweaks to make to
the user interface, but those are primarily cosmetic.



Elements that were not quite ready for the test included the full
reporting functionality, though the pure data collection elements are
all working well; the bubble alarms are not installed; and neither is
the emergency stop button we intend to place on the patient enclosure
(in case the surgeon sees something requiring immediate cessation of
perfusion). We will be performing additional tests on the system’s
memory requirements, to ensure that we will not have any problems
during a long case. Once we tested the perfusion system to that extent,
we tested the final element of the patient enclosure: the ability of
the system to perform first-stage cooling.



This is the step where we plunge the patient’s temperature to just
above the glass transition point for M22, -110 degrees C. The table
itself cooled to -110 in eleven minutes, though of course, it took
longer for the swine to reach that stage. Using an animal that was not
vitrified caused the temperature to be reduced more slowly because of
the heat requirements for the ice formation, but the swine passed the
freezing point in 3.5 hours. We considered this acceptable under the
test conditions. The swine’s temperature continued to drop until it
reached -95 degrees C, at which point we discontinued the test. That
drop took approximately 18 hours. This time is good, given that not all
elements of the system worked exactly as intended, and we expect faster
times as adjustments are made. We did find it took a considerable
amount of nitrogen to reach that stage, but part of this is because our
environmental fans failed. We will be looking into different fans for
the next test and other improvements to reduce nitrogen consumption.



Overall, everyone was quite pleased with the results, and we expect
to make the necessary modifications quickly and are planning our second
test for later this week.



This work was done under the supervision of the Alcor Institutional
Animal Care and Use Committee under Alcor’s USDA registration as an
animal research facility, and was fully compliant with the requirements
and standards of the Animal Welfare Act. The animals used in these
tests were procured from a USDA-registered laboratory animal breeder.



I would like to thank the team who participated in this equipment
test, including: Dr. Craig Woods, Joel Anderson, Stephen Van Sickle,
Hugh Hixon, Randal Fry and Regina Pancake. We would also like to thank
all of the donors who made pursuing this project possible.

Financial crisis

We are up to a point where our country is not producing anything and we all found out that Wall Street is a fraud. Washington is all about corruption and greed....Can we trust our leaders??? Time will tell.

-2501

BeOS/Haiku OS : When?

I am still waiting for the first v1 of Haiku. This is the only OS that would make me switch from Linux to a different OS. I had BeOS R5 and it was awesome. I miss it.

-2501

Wednesday, April 30, 2008

Common linux commands -- Slackzen

Common Linux Commands
less

cat

more

vi

cd

ls

ln {oldfile} {newfile} (create a file link)

mv [options] source dest (Move and rename files or directories)

-b (backup files that are about to be overwritten)

-i (interactive mode)

mkdir [directoryName]

rmdir [directoryName]

rm -rf [directoryName] //force delete recursively without prompting

tail -1000f access_log //use Ctrl+C to break it out.

touch

cp [options] source dest

-b, -i (same as above)

-p (preserve the original file's ownership, group, permissions and timestamp, i.e. the meta info)

chmod {u|g|o|a}{+|-}{r|w|x} file (change permission bits for user, group, others or all)

chown

find managmentconsole/ -name "MCExcep*"

find . -name CVS -exec ls Tag {} \;

grep [options] pattern filenames

-i (case insensitive)

-n (show the line# along with the matched line)

-v (invert match, find all lines that do NOT match)

-w (match entire words)

grep -R "a string" /etc/*

pwd //Display the present working directory

which //Display a program's executable path

whoami

whereis //Locates binaries and manual pages for a command.
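A quick, self-contained demo of the grep options above, run against a throwaway log file (the file name and contents are made up for illustration):

```shell
#!/bin/sh
# Build a scratch file and exercise the grep flags listed above.
printf 'Error: disk full\nall ok\nerror: retrying\n' > app.log
grep -i error app.log    # case-insensitive: matches lines 1 and 3
grep -n ok app.log       # show the line number: prints "2:all ok"
grep -vw error app.log   # lines without the whole (lowercase) word "error"
```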

---------------------------------------------------------

System Command

---------------------------------------------------------

date //check timestamp

df //Available free disk space

ps

pstree

kill

man (View manual pages)

mail (Read your email)

mount

umount

printtool (Use to set up printer)

top (memory usage profile)

su/changeme (do things as superuser), exit (back to original user)

passwd

nohup ./start.sh & //start a program remotely in the background


---------------------------------------------------------

Establish Cron job

---------------------------------------------------------

crontab -e Edit your crontab file, or create one if it doesn't already exist.

crontab -l Display your crontab file.

crontab -r Remove your crontab file.

crontab -v Display the last time you edited your crontab file. (This option is only available on a few systems.)

* * * * * command to be executed
- - - - -
| | | | |
| | | | +----- day of week (0 - 6) (Sunday=0)
| | | +------- month (1 - 12)
| | +--------- day of month (1 - 31)
| +----------- hour (0 - 23)
+------------- min (0 - 59)
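Putting the fields together, a sample entry (the script path here is hypothetical):

```
# min hour dom mon dow  command
0    2    *   *   1     /usr/local/bin/backup.sh   # 02:00 every Monday
```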


------------------------------------------------------

Process handling

------------------------------------------------------

CTRL+Z (suspend the job -- it stops running, unlike a job started with '&', which keeps running in the background)

>> [1]+ Stopped ls -R | more (+ means the latest job that got suspended, and you can suspend >1)

fg (bring back the latest job that was suspended)

fg # (selectively bring back a job)

fg %find (bring back a job whose command starts with the string "find")

fg %?usr (bring back a process that contains 'usr' in its command path)

bg %[number] (send the job 1 to background as &)

bg %[string]

kill %[number]

kill %[string]

jobs (list all background and suspended jobs)

[1] Stopped ls -R /usr >> output

[2]+ Stopped find / -print > output.find

[3] Stopped ls -R /var >> output

[4]- Stopped ls -R >> output

[5] Running ls -R /var >> output &

Job 2 now has the + sign following it (latest job suspended)

Job 4 has the minus sign (second-to-last job suspended)

Job 5 is still running in the background (note the & sign)


------------------------------------------------------

Compression/ Decompression of files

------------------------------------------------------

Compressed files use less disk space and download faster than large,
uncompressed files. You can compress Linux files with the open-source
compression tool Gzip, or with Zip (for file exchange with non-Linux users), which is recognized by most operating systems. You use Tar to archive files into a single tar file, but it does not compress automatically.

gzip filename.ext (compress a file; saved as filename.ext.gz)

gzip file1 file2 (compress each file individually to file1.gz and file2.gz; to bundle several files or a directory into one compressed file, use tar -czf)

gunzip filename.ext.gz (expand the compressed file)

gzip -d xxx.tar.gz (Uncompress the .gz file)

zip -r filename.zip files (compress a file with zip)

unzip filename.zip (expand the compressed zip file)

unzip -l xxx.zip (list all the file out)

zip -d br_MgmtConsole3Dev.zip "*/target/*" (delete everything with target in the path from inside the zip)

tar -cvf xxx.tar [files or directories] (create tar)

tar -czvf xxx.tgz mydir/ (create tgz = tar.gz)

tar -xvf xxx.tar (untar)

tar -xzvf xxx.tgz (untar tgz)

tar -tvf xxx.tar (to list the contents of a tar file)

jar cf abc.jar com com2 ... (jar the listed directories recursively, space-separated, into abc.jar)

jar xf abc.jar (Extract)

jar -tvf xxx.war (examine what is inside it)

jar cf cocoon-2.1.5.1.war webapp (create a war file from the webapp directory)

jar tvf weblogic.jar |grep bcel (do the search on the output)
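A quick round trip shows the tar/gzip commands working together (the paths are made up for the demo):

```shell
# create a demo directory with one file
rm -rf /tmp/arch-demo && mkdir -p /tmp/arch-demo/mydir
echo "hello" > /tmp/arch-demo/mydir/file.txt
cd /tmp/arch-demo

tar -czvf xxx.tgz mydir/   # archive + compress into one .tgz
rm -r mydir                # throw the original away
tar -xzvf xxx.tgz          # extract: mydir/file.txt is back
cat mydir/file.txt         # prints: hello
```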


------------------------------------------------------

Networking

------------------------------------------------------

Here is a good Linux networking guide for your reference.

Host file location: /etc/hosts

ifconfig (check your box's IP address)
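Most of the lookups below need network access; the hosts file itself makes a handy offline check, since nearly every box maps localhost there:

```shell
# print the address of the first non-comment /etc/hosts entry for localhost
# (typically 127.0.0.1)
awk '$1 !~ /^#/ && /localhost/ {print $1; exit}' /etc/hosts
```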

ping (send ICMP ECHO_REQUEST packet to the specific host. If the host is alive, an ICMP packet will be sent back)

traceroute www.justproposed.com (It displays each host that a packet travels through)

host www.justproposed.com (return IP of JP)

nslookup 64.102.102.32 (return DNS name of the IP)

netstat (list open ports and network connections on the local box)

finger

telnet <hostname> <port> (quick test to see if a remote service is up or not)

dig @192.168.1.254 www.slackware.com mx (“@192.168.1.254” specifies the
dns server to use. “www.slackware.com” is the domain name I am
performing a lookup on, and “mx” is the type of lookup I am performing.
The above query tells me that e-mail to www.slackware.com will instead
be sent to mail.slackware.com for delivery.)

ssh

sftp hostname (then you can do "put filename")

scp filename remote_host:directory (copy a file from the current directory to a remote directory), e.g. scp filename1 honr@appden11.xyz.net:/home/honr

scp honr@appden11.xyz.net:/home/honr/filename . (copy a file from a remote directory to the current directory)

wget <url> (download a file from the url - http or ftp)

wget --recursive <url> (download the whole site)

ftp <url> (connect to an ftp server; you can issue commands once connected)

curl -I http://www.google.com (get only the HTTP headers)

Tuesday, April 29, 2008

Installing Slackware on encrypted volumes

from : http://slackware.osuosl.org/slackware-12.0/README_CRYPT.TXT

The Hacker : Gary McKinnon

In 2002, Gary McKinnon was arrested by the UK's national high-tech
crime unit after hacking into NASA and the US military. He says he
spent two years looking for photographic evidence of alien
spacecraft and advanced power technology. America now wants him on
trial, and if tried there he could face 60 years behind bars. Gary has
been banned from using the Internet. We asked for his side of the story
ahead of his extradition hearing on Wednesday...

http://opensc.ning.com/video/video/show?id=610837:Video:1682

gpgdir

gpgdir is a Perl script that uses the CPAN GnuPG::Interface module to encrypt
and decrypt directories using a GPG key specified in ~/.gpgdirrc.

Nice tool to have...

visit: http://cipherdyne.org/gpgdir/

Tuesday, April 08, 2008

Zenwalk 5.0 : The Best of The Best

So far, I have tried many distros: Fedora, Ubuntu, Mepis, Damn Small, Mandriva/Mandrake,
Wolvix, etc....but only Slackware and Zenwalk have what I wanted. At this moment, I decided to keep Zenwalk because it is small, compact, super fast (probably the fastest distro) and,
of course, since it has a Slackware foundation it is rock-solid and super stable. I can do pretty much whatever I want and it will do it for me. Hackers should be in heaven with this distro.

I erased Ubuntu since it was getting fat and installed Zenwalk...I am keeping this one for good.

-2501

Saturday, February 23, 2008

Slackware/Zenwalk: src2pkg tool

from: http://www.linux.com/feature/121499

Slackware's "magic package maker"



By Drew Ames on November 30, 2007 (9:00:00 AM)

Slackware
Linux today features a powerful and easy-to-use package management
system, but making Slackware packages has not always been
straightforward. Now Slackware application developers have a tool for
easily making Slackware packages from source code and precompiled
binaries. Src2pkg, now in version 1.6, very nearly lives up to its
author's tag of being Slackware's "magic package maker."



The traditional method for installing software in Linux is to download the source code, uncompress it, and then run ./configure && make && make install.
The result is compiled software that is installed in the (hopefully)
correct spots in the file system. The problem is that there is no easy
way to track which files end up at which locations. The result is that
it can be extremely difficult to remove or upgrade applications.



Package managers solve that problem. Slackware packages are simply
tar archives, compressed with gzip (with the .tgz extension), that
contain a file structure and the compiled files. Slackware's package
tools unpack the package files to the correct directories and maintain
a database tracking where the files are installed.



Many people seem to equate package management with dependency
checking. In fact, dependency checking is a secondary and, from a
Slackware perspective, a largely unnecessary part of package
management. A full Slackware installation includes a reasonably
complete set of libraries so that the dependencies for most software
are already included. However, when other libraries are required, the
application's Web page usually lists them so they can be downloaded,
compiled, and installed. In just over a year of using Slackware, I have
had to install fewer than 10 libraries for the applications I built.



Until recently the two best ways to make Slackware packages were to
use the program Checkinstall or use SlackBuild scripts. Both methods
compile source code, create a directory structure, and package
everything in a .tgz file. Checkinstall works by substituting the checkinstall command for the make install command. Checkinstall works well with Slackware 11 (released October 2006), but an incompatibility with the latest coreutils causes problems for Checkinstall with Slackware 12 (released July 2007).



SlackBuilds are bash shell scripts that guide the configuration,
compilation, and packaging of source code. Well-written SlackBuilds
work extremely well. Slackbuilds.org maintains a high-quality community repository of SlackBuild scripts for a wide variety of software.



Src2pkg, a better alternative



Src2pkg, a command-line utility whose version 1.0 was released
in February, offers a superior alternative to Checkinstall, and is
useful when a SlackBuild script is not readily available for an application
or when a user is not experienced enough with shell scripting to make one.
Src2pkg's author, Gilbert Ashley, designed the program to not only
compile Slackware packages from source code, but also from Debian or
RPM package binary files, and from generic binary files, which are
described in the src2pkg manual as "binary content which is usually
installed by running an install.sh script, .run file, or other
installation routine." Finally, src2pkg will create build scripts so
that users can customize their package builds. Providing a number of
options for dealing with source code is necessary because of the wide
variety of ways that source code is distributed.



Version 1.6 of src2pkg was officially released on September 18. I downloaded its Slackware package from the author's Web site
and installed it. I built and installed four packages to test the
various features of src2pkg, its documentation, and the level of
support offered by its author.



The easiest way to use src2pkg is to log in as root and simply type src2pkg filename,
but the man page lists a number of switches to customize the build
process. User options use capital letters following a dash, and build
options use lowercase letters following a dash. I found the following
user options useful for my builds:



  • -C -- place finished package in the current directory
  • -N -- generate a starting src2pkg script and slack-desc description file without building the package
  • -S -- use a shell script for installation (install.sh by default)
  • -VV -- be verbose -- show all output from the build steps
  • -W -- remove (wipe out) temporary build files
  • -X -- run the first src2pkg or src2pkg.auto script in the current directory


In addition to the man page, src2pkg has documentation available in
the /usr/doc/src2pkg-1.6 directory. The additional documentation
consists of HTML-encoded descriptions of the various features and
functions of the application, two README files, and an FAQ text file.
The documentation is informative and helpful. The information is dense
but well-written and contains many helpful suggestions for building
packages.



Building Slackware packages



I built Emacs 22.1
as my first package. This latest version of Emacs was released a month
or so before the latest version of Slackware, but there is to date no
official Slackware package for it and no SlackBuild script available at
Slackbuilds.org. I took advantage of another feature of src2pkg and
used it to download the source file and then build the package with
this command:



src2pkg http://ftp.gnu.org/pub/gnu/emacs/emacs-22.1.tar.gz


Src2pkg successfully downloaded the source code archive, and
configured, made, and built the package. The configure process
automatically found the GTK libraries. When src2pkg finished, it
displayed a message with the location of the Slackware package. I
installed it, and Emacs 22.1 ran without a problem. In fact, all of the
packages I built with src2pkg installed successfully and ran well.



I experienced a problem when I attempted my next build of the desktop publishing program Scribus 1.3.3.9. My first attempt resulted in an error:



Creating working directories:
PKG_DIR=/tmp/scribus-1.3.3.9-pkg-1
SRC_DIR=/tmp/scribus-1.3.3.9-src-1
Unpacking source archive - Done
Decompressing (unknown) archive: Scribus.app.tgz FAILED!
Unable to unpack Scribus.app.tgz. Exiting...
src2pkg FAILURE in UNPACK_RENAME


A quick email message to Gilbert Ashley produced a response with the
solution. The error was the result of "a very rare snag with tarballs
that contain another (or more) tarballs inside them." The fix is to
open the file /usr/libexec/src2pkg/FUNCTIONS and uncomment line 778
from #OPEN_SOURCE=1 to OPEN_SOURCE=1. That line is in the file, but commented because the author is aware of the glitch.



After making that configuration change, the Scribus package built
correctly. The command I used to build Scribus used options to ensure
that I could see the output of the build process, that the finished
package was placed in the directory with the source code, and that the
temporary build files were deleted after the package was built:



src2pkg -VV -C -W scribus-1.3.3.9.tar.bz2


To test src2pkg's ability to convert RPM files and its creation of
build scripts, I downloaded a game called Orbit, file name
orbital-1.01-691.i586.rpm, from the OpenSUSE 10.2 repository. The command src2pkg orbital-1.01-691.i586.rpm -N generated a src2pkg script called orbital.src2pkg.auto. The command src2pkg -VV -C -W -X built the package using the src2pkg script in that directory.



The build script generated by src2pkg is very simple, consisting of
44 lines. The following lines are where some users will want to add
configuration options:



# Any extra options go here
# EXTRA_CONFIGS=''
# STD_FLAGS='-O2 -march=i486 -mtune=i686'


Build scripts can help users save specific configuration options so
that they can be repeated each time the package is built. Additionally,
build scripts can be real time-savers when you're troubleshooting a
package build. Simply change a line or two in the script, build the
package again, and repeat the process until the package is just the way
you want it.



My final package build for this test was Opera 9.24.
Opera distributes the browser as a generic binary that
uses an interactive shell script, install.sh, to configure and install
the application. Therefore, src2pkg requires the use of the -S switch.



To build the package, I used the command src2pkg -C -VV -W -S opera-9.24-20071015.6-shared-qt.i386-en.tar.gz. The interactive installation script ran successfully within the src2pkg process.



Tuesday, February 19, 2008

EVE Online Appoints In-World Economist

EVE Online Appoints In-World Economist

Reykjavik, Iceland – June 27, 2007 – CCP
Games, one of the world’s largest independent game developers, today
announced the appointment of an in-world lead economist for EVE Online.
This is the first time an MMOG has commissioned a dedicated real world
economist to operate at this level of economic monitoring and research
for a virtual world. The appointment is a real testament to the growing
intricacy and strength of EVE Online’s thriving virtual economy.

Dr.
Eyjólfur Guðmundsson brings over 15 years’ experience in economic
studies and research, and joins CCP directly from the University of
Akureyri, Iceland, where, among several teaching positions, he served as
Dean of the Faculty of Business and Science. Prior to
that he was a research associate at the University of Rhode Island’s
Department of Environmental and Natural Resource Economics where he
also completed his PhD. He has authored or co-authored 15 publications.
Dr. Guðmundsson’s first blog on EVE Online economics is at
http://myeve.eve-online.com/devblog.asp?a=blog&bid=481.

Dr.
Guðmundsson will publish quarterly reports on the state of the EVE
Online economy as well as ongoing analysis of other economic
indicators, such as inflation, economic growth and price trends. His
research is designed to provide players with information necessary to
make strategic decisions, but is also expected to have an impact on
future development of the game. Dr. Guðmundsson will also be
responsible for coordinating research initiatives with academic
institutions.

"EVE Online may be set in the future, but the
skills needed to play are rooted in the real world of today. Players
operate vast corporations whose shares are traded in-game among players
so economic strength and agility is key to their success. Just as
entrepreneurs and executives rely on real-world economic indicators,
EVE Online players need timely information and analysis of the in-game
economy,” said Hilmar Pétursson, CEO of CCP Games. “That’s why we
created this important position and we’re delighted to have someone of
Eyjólfur’s caliber and expertise fill the role.”

“Virtual
worlds and MMOGs are emerging as one of the most interesting areas of
experimental economics. Since becoming involved with EVE Online, I have
been exploring the game and growing more fascinated with the community,
its complexities and the unlimited potential of it all. I can see that
CCP has understood that the social structures in EVE Online are far
beyond those of other games,” said Eyjólfur Guðmundsson. “Economic
information is the lifeblood of the game and I believe that, by
ensuring everyone has access to the same data, we will enhance the
player experience and facilitate economic stability in EVE.”

Monday, February 18, 2008

Zenwalk 5.0 just rocks!

Well...I decided to try Zenwalk 5.0 and it blew me away. I stopped using Ubuntu after
noticing that Zenwalk was very slim, not bloated, and very fast on my Dell Inspiron 600m.
I was able to run a couple of C programs that I have with no problems, and I was also able to
connect to my Airport Express very quickly. The Iceweasel browser is working perfectly, and so far I
can tell that it is a rock-solid distro since it is Slackware-based. It is like a mini Slackware without all the fat you see in other distros.

I am keeping this one!

-2501

Sunday, February 17, 2008

My thoughts about Carlos Castaneda's last battle on earth

I am a fan of Carlos Castaneda's book series, I can't deny it. A lot of my friends have
come to me asking about his death and his departure to the unknown. Why did he not
vanish like Juan Matus?

For me, you can read his books as fiction or as real stories, but that does not matter to me. His stories,
as in religion, don't really prove to you whether they are real or not...you need to have faith. But in
Castaneda's case, I think he showed how to see the reality of the world we live in and how to keep
your discipline in whatever you do, 24 hours a day, 365 days a year. That's what matters to me, and at
this point in my life, I don't think about heaven or hell because there is no real proof of it.
I have already accepted death as my advisor. Death is always walking next to us, and someday he will
look me straight in the eyes, telling me that it is time to go, and I won't fight back.

Life is short and I play very hard everyday. Hopefully, I can negotiate some kind of deal with
Death in my last battle on earth.

Saturday, February 02, 2008

Interview: Beating Colossus

from: http://www.netbsd.org/gallery/schueth-interview.html


Joachim Schueth has beaten a reconstruction of the famous
Colossus Mark II
code breaking machine in November 2007. The Colossus computers were
used in World War II to break the German encrypted messages.
Equipped with a NetBSD-powered laptop and profound knowledge of
cryptography and the Ada programming language, Schueth has won the
code-cracking challenge. We talked with him about the historical and technical
backgrounds of the
Cipher Event  and the tools he has used.

Powered by ScribeFire.

Sunday, January 13, 2008

Israeli prisoners uncover what may be the Holy Land's oldest church

from: israelinsider (Culture)

Prisoners excavating a site
near the biblical Armageddon uncovered what archeologists said Sunday
may be the Holy Land's oldest church.


Told to dig in an area where the Prisons' Authority wants to build
new wards for 1,200 Palestinian security prisoners, the Israeli
criminals uncovered mosaics that experts said was the floor of a church
from the third century, decades before Constantine legalized
Christianity across the Byzantine Empire.


"What's clear today is that it's the oldest archaeological remains
of a church in Israel, maybe even in the entire region, whether in the
entire world, it's still too early to say," said Yotam Tepper, the
excavation's head archaeologist.


Israeli officials were giddy at the news, with Prime Minister Ariel Sharon calling the church "an amazing story."



Vatican officials also hailed the find.



"A discovery of this kind will make Israel more interesting to all
Christians, for the Church all over the world," said Archbishop Pietro
Sambi, the Vatican's envoy to Jerusalem. "If it's true that the church
and the beautiful mosaics are from the third century, it would be one
of the most ancient churches in the Middle East."


Two mosaics inside the church -- one covered with fish, an ancient
Christian symbol that predates the cross -- tell the story of a Roman
officer and a woman named Aketous who donated money to build the church
in the memory "of the god, Jesus Christ."


Pottery remnants from the third century, the style of Greek writing
used in the inscriptions, ancient geometric patterns in the mosaics and
the depiction of fish rather than the cross indicate that the church
was no longer used by the fourth century, Tepper said.


The church's location, not far from the spot where the New
Testament says the final battle between good and evil will take place,
also made sense because a bishop was active in the area at the time,
said Tepper, who works with the Israel Antiquities Authority.


The inscription, which specifies that Aketous donated a table to
the church, indicates the house of worship predated the Byzantine era,
when Christians began using altars in place of tables in their rituals,
Tepper said. Remnants of a table were uncovered between the two
mosaics.


The building -- most of which was destroyed -- also was not built
in the Basilica-style that was standard under the Byzantines, he added.



Stephen Pfann, a biblical scholar and professor at the Holy Land
University, said the second and third centuries were transitional
periods where people sought to define their religious beliefs and modes
of worship. Iconography and inscriptions found in Nazareth and
Capernaum -- places where Jesus lived -- show that people went there to
worship, but most did so secretly.


"This was a time of persecution and in this way it is quite
surprising that there would be such a blatant expression of Christ in a
mosaic, but it may be the very reason why the church was destroyed,"
Pfann said.


About 50 prisoners were brought into the high-security Megiddo
Prison to excavate the area before construction began. Ramil Razilo and
Meimon Biton -- the two criminals who first uncovered the mosaics --
used yellow sponges and buckets of water on Sunday to wipe dirt off
their findings.


Initially thinking he was just removing useless rubble from the
area, Razilo was shocked when the edge of the elaborate mosaic appeared
at the tip of his shovel, putting him in the media limelight just a
month before completing a two-year sentence for traffic violations.


"We worked for months to find the parts," Razilo said. "First we
found the first part, the corner, but we didn't understand what was
spoken of, but we continued to look and slowly we found this whole
beautiful thing."


Israel would like to make the site -- currently covered by a white
makeshift tent -- into a tourist attraction, but won't be able to do so
without uprooting either the mosaic or the prison.


The Prisons' Authority and the Antiquities Authority are
considering their options, and the dig will continue as archeologists
try to uncover the rest of the building and its surroundings, including
what they believe could be a baptismal site, Tepper said.


Joe Zias, an anthropologist and former curator with the Antiquities
Authority, questioned the dating of the find. There is no evidence of
churches before the fourth century, he said. The building may have been
in use earlier, but most likely not for Christian religious purposes,
he said.


"They're going to be hard, hard-pressed to prove it ... because the evidence argues otherwise," Zias said.


Software for Codebreaking of the Lorenz SZ42 + NetBSD


Just by using NetBSD OS and a simple laptop, an amateur cryptographer was able to
break the Lorenz SZ42. Nice!!!

SZ42 codebreaking software



NASA Mars Images Reveal a Doorway Structure

from: The Daily Galaxy: News from Planet Earth & Beyond

There is a strange door-like structure at the base of the mountain formation in a NASA image of Mars that is causing a stir. The first person to notice it wasn't a NASA scientist, however, but rather a Russian reader of the portal R&D.Cnews, Alexander Novgorodov. Taking a closer look at an image taken by the spacecraft Mars Reconnaissance Orbiter, he noticed an unusual morphology, which looks strikingly like a manmade doorway.



UFO - The Navy and the SETI