Unix Tools
Markus Kuhn
Original notes by A C Norman
Part Ib

1 Introduction

This course is called "Unix Tools", and this is because the various support utilities that it discusses originated with Unix and fit in with a philosophy that was made explicit from the early days of that operating system. It should however be noted that most of the particular programs or tools mentioned have been found sufficiently useful by generations of programmers that versions have been ported to other operating systems, notably Microsoft Windows, and so whatever platform you use now or expect to use in the future there is something here that may prove relevant to you. Even when running under Windows it seems proper to refer to things as "Unix" tools, both for historical reasons and because the style of interface that these tools provide contrasts quite strongly with that seen in (say) the Microsoft Visual Studio. [Footnote: The environment within which one influential vendor's set of native Windows development tools reside.]

This course is short and it is also unusual in that no questions on it will appear on the examination papers at the end of the year. These two facts may lead to the impression that the department considers the material covered unimportant or optional. Any such impression is ill-founded. It is anticipated that techniques mentioned during this course will be of relevance in later practical work: specifically both the Group Project this year and your individual project next year. Familiarity with and competent use of standard tools and techniques can make your work on these projects significantly more efficient, and everyone assessing practical work is entitled to assume this fluency when judging whether the amount of work done was more or less than could reasonably be expected of you.

These printed lecture notes reflect mostly the content of the original course given by Arthur Norman, with some additions and updates from the lecturers who have taught it since then. They still complement the course very nicely but do not aim at covering all topics discussed in the course. They should therefore be studied in addition to the course presentation slides that are available on

    http://www.cl.cam.ac.uk/teaching/current/UnixTools/

This Web page also has links to further material such as manuals for some of the discussed tools in an easy-to-print format, as well as links to related online resources.

I believe that your practical skills will only develop with practical experience, so I would urge you all to try using each of the tools and techniques mentioned here. I will generally only explain the simple ways of using each facility, and as you gain confidence it may be that you will benefit if you deepen your understanding by reading the man pages or other documentation. Many of the commands will display a concise 'reminder' of their usage by simply invoking them with the argument -h or --help. "Info pages" provided for GNU tools can also be useful, and can be viewed with the info command or in emacs by invoking Ctrl-h i.

Over the past decade, there have been two attempts to standardise a minimal common set of classic Unix tools, including the shell. One is the IEEE 1003.2 POSIX Shell and Utilities specification, the other is the Open Group's Single Unix Specification. In late 2001, both these standards were finally merged into a single one, which can now be freely accessed online as the Single Unix Specification, Version 3 at http://www.unix.org/, and which is also available in printed form in the Computer Laboratory's library (ST.8.7).
A thread I hope will run through my presentation is that the tools discussed are not totally arbitrary in their design (despite some of the initial impressions that they give). There is at least a part of their construction that concerns itself with compatibility of ideas from one tool to the next and with the exploitation of powerful and general computer science fundamentals such as regular expressions.

The "Unix philosophy" that I mentioned earlier is that (ideally) the world would contain a number of tool components, each addressing just one problem. Each individual tool would then be small, easy to learn, but completely general in its treatment of the limited class of problem that it addressed. The Unix approach is then to solve typically messy real-world problems by combining the use of several such basic tools. In this spirit there will be a small number of major ideas underlying all the material covered here:

1. Complex tasks are often best solved by linking together several existing programs rather than by re-inventing every possible low-level detail of the wheel over again (see the pipeline sketch at the end of this introduction);

2. Regular expressions, seen in the Part Ia course as a mathematical abstraction of the patterns that finite-state machines can process, generalise to provide amazingly powerful and flexible (if sometimes obscure-looking) capabilities;

3. There should be a smooth transition between the tasks you perform one at a time interactively and those that need real programs written to perform them. The Unix tool tradition is particularly strong on helping to automate tasks of medium complexity;

4. The first Unix tools originated around 30 years ago at AT&T Bell Labs, when the main input/output devices available were slow and noisy teletype terminals. At the time, the most convenient software was that which could be used with the fewest keystrokes and that kept the output short and concise. I/O facilities have become far more sophisticated since then, but the Unix tradition of compact text command notations has remained highly popular among expert users, not only because it is well suited to automating tasks.

The mouse/menu interfaces pioneered by Xerox and Apple are utterly admirable for making editors easy to use for the untutored or casual user. They are also helpful when your main concern is the visual appearance of a page of text, since they can make it easy to select a block of text and change its attributes. But for very many other tasks a keyboard-based (vermin free?) approach can let a short sequence of keystrokes achieve what would otherwise require much mouse movement and the frequent and distracting change of focus between mouse and keyboard. The learning effort required pays off as you are able to get your work done faster.

The topics covered here are somewhat inter-related and so a strictly linear and compartmentalised coverage would not be satisfactory. I will thus often refer back to concepts sketched earlier on and flesh out additional details in examples given later in the course. In four lectures it must be clear that I cannot cover all of the facilities that Unix provides, and the language perl that I discuss towards the end could of itself justify a full-length lecture course and a host of associated practical classes. You must thus be aware that this course is going to be superficial, and those of you who already count yourselves as experts are liable to find many of your favourite idioms are not covered.
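To make the tool-combination idea concrete, here is the classic word-frequency pipeline: a hedged sketch rather than an example from these notes, with the file name notes.txt chosen arbitrarily. Each stage is a small tool doing one job, chained by pipes:

    # List the ten most frequent words in notes.txt.
    tr -cs 'A-Za-z' '\n' < notes.txt |  # split text into one word per line
    tr 'A-Z' 'a-z' |                    # fold everything to lower case
    sort |                              # bring identical words together
    uniq -c |                           # count each distinct word
    sort -rn |                          # most frequent first
    head -10                            # keep the top ten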
However the lectures can (I hope) still form a good starting point for those who are relative Unix beginners, while these notes can be a reminder of what is available, a modest source of cook-book examples and a reminder of which parts of the full documentation you might want to read when you have some spare time.

2 The "Unix shell"

These first few sections will recapitulate material that you will (mostly) have come across in the introductions to Unix you had at the start of the Part Ia Java course, or that was mentioned in part of the Operating Systems thread. Repetition in these notes will help keep this course self-contained, although the lectures will skim over this information very rapidly. See [1] for a tolerably concise expansion of what I have included here.

Part of the Unix design involved making all the functionality supported by the operating system available as function calls, and making as many of these calls as possible available to all users. In other words a deliberate attempt was made to arrange that Unix security only needed a very small set of operations to be run as privileged system tasks. Partly as a demonstration of this, the system ensured that the shell could be written as an ordinary user-mode program. The shell comprises the fundamental interface the user sees to Unix: it is the component of Unix that lets you type in commands, which it then executes for you. Many other operating systems give their shells private and special ways of talking to the inner parts of the operating system, so that it is not reasonable for a user to implement a replacement. Two consequences have arisen. The first is that there are many different Unix shells available. This can be a cause of significant confusion. The second is that good ideas originally implemented in one of these shells have eventually found their way (often in slightly different form) into the others. The result is that the major Unix shells now have a very substantial range of capabilities, and the way in which these are activated has benefited from a great deal of experimentation and field-testing.

The original Unix shell is known as the Bourne Shell (after Steve Bourne, who after leaving Cambridge went to Bell Laboratories, where his enthusiasm for Algol 68 had its effects). The major incompatible shell you may come across is the C shell, where the "C" is both to indicate that its syntax is inspired by C, and also (given a Unix tradition of horrible puns) because one expects to find shells on beaches, so a C-shell is an obvious thing to talk about. There have been many successors to these two shells. The one that you are (strongly) encouraged to use here is basically upwards compatible with the Bourne Shell, and is known as bash, the Bo(u)rn(e)-Again SHell. bash is part of the excellent GNU reimplementation of the Unix tools, and is available on almost every platform imaginable. Anyone thinking of using a C-shell derivative is advised to read Tom Christiansen's article "Csh Programming Considered Harmful" [5] first.

Whenever you are typing in a command at the usual Unix command-prompt you are talking to your current shell. Also, if you put some text in a file and set the file to have "executable" status (eg by saying chmod +x filename), then entering the name of the file will get the shell to obey the sequence of commands contained. In such a case it is considered standard and polite to make the first line of the file contain the incantation

    #!/bin/sh

where /bin/sh is the full file-name of the shell you are intending to use.
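For instance, a minimal executable script might be created and run like this (a hedged sketch; the file name greet is arbitrary):

    $ cat greet                # a two-line script file
    #!/bin/sh
    echo "Hello from $0"
    $ chmod +x greet           # mark the file executable
    $ ./greet
    Hello from ./greet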
The "#" mark makes this initial line a comment, and the following "!" together with the fact that it is the very first thing in the file mark it as a special comment used to indicate what should be used to process the file. Some of the examples given here will be most readily tested interactively, while others will be best put into files while you perfect the lengthy and messy runes.

Some small and common things that you may have thought of as commands are in fact built into the shell (for instance cd), particularly those that change the state of the shell. [Footnote: For the purposes of this course I am going to suppose that some of the basic Unix commands are already familiar. But the suggested textbook will give brief explanations even if I do happen to mention something that you have not seen or that you have forgotten about.] But the most interesting shell features relate to ways to run other programs and provide them with parameters and input data.

3 Streams, redirection, pipes

Central to the Unix design is the idea of a stream of bytes. Streams are the foundation for input and output, and at one (fairly low) level they are identified by simple small integer identifiers. When a program is started the shell provides it with three standard streams, with numeric identities 0, 1 and 2. The first of these is standard input, and programs tend to read data from there if they have nothing better to do. The second is a place for standard output to be sent, while the third is intended for error messages. If you start a program without giving the shell more explicit information, it will connect your keyboard so it provides data for the standard input, and it will direct both of the default output streams to your screen.

These standard streams can be redirected so that they either access the filing system or provide communication between pairs of programs. The importance (for today) of file redirection is that it means that a program can be written so that it just reads from its standard input and writes to its standard output. Using redirection the shell can then cause it to take data from one file and write its results to another. The program itself does not have to bother with any file-names or distinctions between files and the keyboard or screen.

    my_program <input_data.file >output_data.file

If >> is used as a redirection operator the new data is appended to the output file. This can be very useful when executing a sequence of commands:

    #!/bin/sh
    echo "Test run starting" >log.file
    date >>log.file
    my_program >>log.file
    echo "end of test" >>log.file

Two programs can be linked so that the (standard) output from the first is fed in as the (standard) input to the second. The fact that this is so very easy to arrange encourages a style where you collect a whole bunch of small utilities, each of which performs just one simple task, and you then chain them together as pipes to perform some more elaborate process. I will use this in quite a number of the examples given later in these notes. One useful program to put in a pipe is tee, which passes material straight from its input to output, but also diverts a copy to a log-file. The following (not very useful) example uses cat to copy an input file to its standard output. tee then captures a copy of this to a log file, and passes the data on to my_program for whatever processing is needed.
    cat input.file | tee log.file | my_program

A further use of pipes and tee is as follows, where the standard output from a test run is permitted to appear on the screen but a copy is also diverted to a log file in case detailed examination is called for at a later date. The output is piped through more to make it possible to read it all even when it is too long to fit on a single screen. The Unix enthusiast would point out the power of pipes, where the functionalities of both tee and more are being combined without the need for a messy composite utility.

    my_second_program | tee log.file | more

Especially when debugging code it is often important to be able to redirect the error output as well as the regular one. This is one of the areas where the exact syntax to be used depends on which shell you are using, and so my use of the Bourne Shell or one of its derivatives does matter. For such shells the form 2>error.file redirects the standard error file (descriptor number 2) so that material is sent to the named file. It is also sometimes useful to be able to redirect standard error to standard output, so you can feed the combined stream into another program. Again, the exact syntax is shell specific, but the following works for bash and is quite useful when you're compiling a program with lots of errors:

    make my_program 2>&1 | more

When redirecting standard output and standard error to a file, the order of the redirections is reversed in some sense. In other words, we write

    make my_program >log 2>&1

so that the standard output file descriptor is duplicated after being redirected, whereas if we wanted to do the same thing at the beginning of a pipeline we would write

    make my_program 2>&1 | ...

This is because the redirection of the standard output implied by the pipe separator is performed before any redirection specified in the commands composing the pipeline. Likewise, in a shell script it is sometimes useful to be able to echo messages to standard error:

    echo "Arghh! It's all broken" 1>&2

A final common redirection feature known as 'Here Documents' is activated using <<. This makes it possible to embed an input document within a shell script file. After the doubled angle bracket you put some word, and the standard input to the command activated will then be all lines from the command input source up to one that exactly matches this word:

    #!/bin/sh
    cat <<XXX >output.file
    line 1 to go in the new file
    line 2 to go in the new file
    XXX

4 Command-line expansion

When the shell is about to process a command it first performs some expansion on it. It will interpret some sequences of characters as patterns, and replace them with a list of all the names of files that match those patterns. As a special and perhaps common case the single character "*" is a pattern that matches the names of all files in the current directory. [Footnote: Well, all except for the "hidden" file-names that start with a dot...] For these purposes a sub-directory is just another file. Because this wild-card expansion is performed by the shell before a command is executed, its effects are available whatever command you are using. Perhaps a convenient one to try is echo, which just prints back its parameters to you:

    echo *

will display a list of the files in the current directory. Of course to achieve this effect you would normally use ls, which lays out the list neatly and provides lots of jolly options, but use of a pattern means that you can send the list of file-names to any program, not just to echo.
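As a quick hedged illustration (the .frog files are hypothetical), the expansion happens before the command runs, so any program simply receives the resulting list of names as arguments:

    $ echo *.frog
    crown.frog tadpole.frog
    $ wc -l *.frog            # wc sees two file-name arguments
      120 crown.frog
       85 tadpole.frog
      205 total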
For now the important components of a pattern are:

1. Most characters stand literally for themselves;

2. An asterisk (*) matches an arbitrary string of characters. In file-name expansion, file-names that start with a dot (.) are treated specially and the wild-card asterisk will not match that initial dot;

3. A question mark (?) matches any single character, again except for an initial dot;

4. A backslash (\) causes the following character to lose any special meaning, and so if you need a pattern that matches against an asterisk or question mark (or indeed a backslash) one may be called for;

5. Quotation marks (either single or double) can also be used to protect special characters. Note that as well as * and ? the Unix shell may be treating $ and all sorts of other punctuation marks specially, so in case of doubt use quotation marks or backslash escapes fairly liberally. Inside single quotation marks (') all characters lose their special meaning, whereas inside double quotation marks (") the characters \, $ and ` keep their special rôles.

In addition to file-name expansion, the shell expands commands by permitting reference to environment variables. This is indicated by writing a variable name preceded by a dollar sign ($name). It is also legal to write a variable reference as ${name}, where the braces provide a clear way of indicating where the variable name ends. There are liable to be quite a few variables predefined for you, some set up by the shell itself, some by scripts that are run for you when you log on. To set a new variable you may write

    variable_name=value ; export variable_name

where the use of export causes the variable to be visible not only within the current shell but also to all programs called by it. A shorthand form of the above is:

    export variable_name=value

The built-in command set displays all variables and their values accessible in the current shell. The command printenv on the other hand displays only the exported environment variables that can be seen by any program called from the current shell. A slightly more concrete example shows that with a lot of shell variables set up, the actual commands that you issue may turn out to be almost entirely built out of references to variables.

    LANGUAGE=java
    COMPILER=javac
    OPTIONS=
    SOURCE=hello_world
    $COMPILER $OPTIONS $SOURCE.$LANGUAGE

Especially in script files there is a great deal to be said for establishing variables to hold the names of compilers that you use and of the options that must be passed to them, since it makes everything much easier to alter if you move your programs to a slightly different environment later on. For instance on various Unix machines that I have used, the C compiler is called sometimes cc, sometimes gcc, sometimes c89, maybe ncc and even /opt/EA/SUNWspro/bin/cc. Changing just one setting of a variable to allow for this is neater than making extensive edits throughout long scripts.

A rather different but convenient use of variables is to hold the names of directories. If you often work within a directory with a rather long name, say /home/acn1/Project/version-1_0_3/source, then you might set a variable (say SRC) to that long string. Then you can use $SRC freely on the command line to allow you to select your important directory or refer to files within it with much less typing than you might otherwise need. An additional advantage of this strategy is that when you move on to version 1_0_4 you can just change the one place where you define this variable, and you will then naturally access the newer location.
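The quoting rules in the list above matter as soon as variables are involved; a hedged check at the prompt (your $HOME will of course differ):

    $ echo $HOME          # unquoted: the variable is expanded
    /home/acn1
    $ echo "$HOME"        # double quotes: $ keeps its special role
    /home/acn1
    $ echo '$HOME'        # single quotes: no expansion at all
    $HOME
    $ echo \$HOME         # backslash: the $ loses its meaning
    $HOME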
Another key-stroke saving tip is to define shell functions that alias other commands (or sequences of commands). For example:

    function m () { more "$@" ; }
    function ll () { ls -al "$@" ; }

The same effect could be achieved by creating shell scripts and placing them on the search PATH, but defining a function is more efficient.

Within a shell script $1, $2, ... refer to arguments passed when the script was started, $* expands to a list of all the arguments, while $# is replaced by the number of arguments provided. Consider a file called demo that contains

    #!/bin/sh
    echo Start of $0, called with $# arguments
    echo All args: $*
    echo Arg 1 = $1
    echo Arg 2 = $2
    # Let's put another comment here.

which also illustrates that "argument zero" will be taken to refer to the name of the script that is being executed. A use of the above script might be

    $ ./demo hah                             <- what I typed in
    Start of ./demo, called with 1 arguments <- output from script
    All args: hah
    Arg 1 = hah
    Arg 2 =                                  <- $2 = empty string

Redirection allows one to send the output from a command to the standard input of another. Sometimes you may want to incorporate the (standard) output from one command as part of another command. This is achieved by writing the first command within $(...) or "back-quotes", which appear in many fonts as grave accents (`...`). A sensible example of this in use will be shown later on, but for now I will illustrate it with

    $ echo "Today is $(date -I)."
    Today is 2007-10-11.

5 find and grep

One of the expectations that comes with Unix is that it should be easy to specify that operations should be performed upon multiple files by issuing just one command. File-name expansion as described earlier provides the simplest way of listing a bunch of files to be processed: sometimes it is useful to have rather more subtle selection and filtering procedures. The tool find is used when this should be based on the file's name and attributes (eg date of last update), while grep inspects the contents of files.

5.1 find

The find command takes two groups of arguments. The first few arguments must be path-names (ie typically the names of directories), and the subsequent ones are conditions to apply when searching through the directories mentioned. The conditions that can be used include tests on the name, creation and modification date, access permissions and owner of files. A special "condition" -print causes the name of the file currently being processed to be sent to the standard output, and it will often be shown as the final item on the command-line. [Footnote: Some implementations of find have an implicit -print if no other action is specified, but do not rely on that even if the one that you usually use does it.] Unless combined using -o, which stands for OR, all previous conditions must be satisfied if the -print is to be activated. Conditions may be negated using !.

The conditions -atime, -mtime and -ctime test the access, modification or creation times of files. They are followed by an integer. If written unsigned they accept files exactly that many days old. If it is written with a + sign they accept files older than that, and a - asks for files younger. A condition -name is followed by a pattern much like those previously seen in file-name expansion, and checks the name of the file. You will normally need to put a backslash before any special characters in the pattern. There are lots of other options, including ones to execute arbitrary programs whenever a file is accepted, but those of you who want to use them can read the full documentation.
Plausible uses are illustrated in the following examples:

1. List all files that have not been accessed for at least 150 days. These are obvious candidates for moving to an archive, or compressing, or even deleting. Note that by searching in the current directory (.) even files whose names start with a dot will be listed here.

    find . -atime +150 -print

2. List files that have been created during the last week. File-name expansion means that the * is turned into a list of all files in the current directory apart from those whose name starts with dot. Reminding oneself of recently created or modified files may be useful when you want to consider what to back up.

    find * -ctime -7 -print

3. Delete all files in the current directory or any sub-directory thereof if their names end in ".old". The -i flag to the rm command gets it to ask the user for confirmation in each case, which makes this command a little safer to issue. Observe the back-quotes to get the output from find presented as command-line arguments to the rm command.

    rm -i `find . -name \*.old -print`

Alternatively the xargs command could be used to build the call to rm. This command builds a command from an initial command name together with whatever it finds on its standard input, and works better than the previous scheme when there are a very large number of arguments to be passed. [Footnote: Specifically, when you issue an ordinary command, including via the backquote construction, there may be a limit on the length of the command-line that can be handled. xargs goes to some trouble to invoke programs properly even when they are to be passed utterly huge numbers of arguments.]

    find . -name \*.old -print | xargs rm -i

The alternative

    find . -name \*.old -exec rm -i {} \;

calls the command rm -i individually for each file found, which is far less efficient in this particular example but useful for calling programs other than rm that can handle only one file-name parameter at a time. The {} marks where the file name will be inserted and the semicolon marks the end of the -exec "condition".

4. Search the current directory and subdirectories for all entries that are files (excluding directories), and that have the 'other user' execute permission bit set.

    find . -type f -a -perm -001 -print

5. List any files in my program directory that are empty and whose name does not start with tmp.

    find program -size 0 ! -name tmp\* -print

find is obviously valuable as an interactive tool, perhaps especially for helping keep your file-space tidy. It is also a valuable building-block in scripts.

5.2 grep

grep is the first tool that I will describe here that makes serious use of the Unix interpretation of regular expressions. Its use is

    grep options regular-expression file(s)

where the options may include -i to make searches case insensitive or -c to make it just count the number of matches found in each file. Normally grep searches through all the files indicated and displays each line that contains a match for the given regular expression. The option -l gets you just a list of the names of files within which there are matches. The sense of the matching can be inverted with the -v option.

You need to be aware that there are related commands called egrep and fgrep that support different degrees of generality in the pattern matching. Furthermore, on some computers you will find that the program invoked by the grep command has either more or less capability than is mentioned here. This all arises because matching against very general regular expressions can be an extremely expensive process, so the early Unix tool-builders decided to provide three different search engines for trivial (fgrep), typical (grep) and ambitious (egrep) uses.
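Before looking at the pattern language in detail, a few hedged examples of these options in everyday use (the file names are hypothetical):

    grep -n printf main.c        # show matching lines, with line numbers
    grep -i -l error *.log       # just name the log files mentioning "error"
    grep -c main *.c             # count matching lines in each source file
    grep -v DEBUG trace.log      # print only the lines that do NOT match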
In describing the regular expression formats that are available I will mark ones that need egrep with a (†). Given that today's computers are pretty fast you might like to standardise on using egrep to reduce your worry on this front.

All the real interest and cleverness with grep comes in the regular expressions that it uses. You will recall the rather spartan definition of a regular expression used in the Part Ia course that introduced them. Those provided everything that was actually needed to describe any regular language, but in many realistic cases there is a very great benefit in using additional short-cuts. The following are the more important of the constructs supported by grep, and as we will see later most of them are also used with sed and perl as well as various other Unix-inspired tools.

a, b, ...  In general, characters in a regular expression stand for themselves. If one of the special characters mentioned below is needed as an ordinary literal, that can be arranged by sticking a backslash in front of it. Note then (of course) that to get this backslash through to where grep will find it you may need either quote marks or yet a further backslash, and things can start to look messy!

A B  Concatenating regular expressions works in the obvious manner. An effect is that strings of literal characters can be given and match words in much the way you might expect;

( A ) (†)  Where necessary you may use parentheses to group sub-parts of a complicated expression;

A | B (†)  Alternation is written using a vertical bar, which may be read as OR;

A*  The star operator applies to the previous character or bracketed expression, and matches zero or more instances of it;

A+ (†)  Much like the star operator, but accepts one or more instances of things that match the given pattern;

A? (†)  Zero or one matches for the given item;

A\{n,m\}  From n to m repetitions. Amazingly this construct is only guaranteed to be available in grep, and for egrep you may be able to achieve the same effect with a pattern that omits the backslashes. This is a natural generalisation of the more common cases that use *, + and ?;

[a-z]  This matches a single character, which must be one of the ones listed within the brackets. Ranges of characters are shown with a hyphen. If you put a hyphen or close (square) bracket as the very first character then it is treated as a literal, not as part of the syntax of the construct. The mark "^" can be used at the start of the bracketed list to negate the sense of the match;

.  A dot matches any single character except a newline. Thus .* matches any string of characters not including newlines;

^ and $  Normally patterns are looked for anywhere within a source line. If you put a ^ at the start of an expression it will only match at the start of a line, while a $ at the end ensures that matches are only accepted at the end of a line. Use both if you want to match a whole line exactly;

\< and \>  These allow you to insist that a certain place within your pattern matches the start or end of a word. This facility is only supported in some implementations of grep.

Again I think that the possibilities are best explored via some examples. Firstly I will give just regular expressions, and then I will build them into complete commands showing grep in a potentially useful context:

1. A pattern that matches words that start with a capital letter but where the rest of the characters (if any) are lower case letters and digits or underscores

    [A-Z][a-z0-9_]*

2. The string "#include" at the start of a line, apart from possible leading blanks

    ^ *#include
3. A line consisting of just the single word END

    ^END$

4. A line with at least two equals signs on it, with at least one character between them

    =.+=

5. Find which file (and which line within it) the string class LostIt is in, given that it is either in the current directory or in one called extras

    grep 'class LostIt' *.java extras/*.java

6. Count the number of lines on which the word if occurs in each file whose name is of the form *.txt.

    grep -c "\<if\>" *.txt

The output in this case is a list showing each file-name, followed by a colon and then the count of the number of lines which contain the given string. The use of \<..\> means that if embedded within a longer word will not be recognised.

7. As above, but then use grep again on the output to select out the lines that end with :0, ie those which give the names of files that do not contain the word if. This also illustrates that if no files are specified grep scans the standard input.

    grep -c "\<if\>" *.txt | grep ":0\$"

8. Start the editor passing it the names of all your source files that mention some variable, presumably because you want to review or change just those ones. You could obviously use the same sort of construct to print out just those files, or perform any other plausible operation on them.

    emacs `grep -l some_variable *.java`

Note that grep has its own idea of what a "word" is, and so in some circumstances you may want to write a more elaborate pattern to cope with different syntax. Regular expressions that look only at individual lines do not provide a sufficiently general way of describing patterns to allow you to do real parsing of programming languages, and as present in grep they do not even make it easy to distinguish between the body of your program, comments and the contents of strings. However with a modest amount of ingenuity they can often let you specify things well enough that you can search for particular constructions that are interesting to you. Some people may even go to the extreme of laying out their code in stylised manners to make grep searches easier to conduct!

6 Exit codes and conditional execution

When commands terminate they exit with an integer value known as an exit code. In general, non-zero exit codes indicate that some sort of error or other abnormal condition occurred. The exit code of the most recently executed foreground command can be accessed via the $? environment variable. For example:

    $ echo "numbers" | egrep '[0-9]+' >/dev/null ; echo $?
    1
    $ echo "123" | egrep '[0-9]+' >/dev/null ; echo $?
    0

It is possible to make the execution (or not) of commands dependent on the exit code of previous commands using the || and && operators. These operators provide short-circuiting OR and AND functions respectively. In the case of A || B, B is executed iff A has a non-zero exit status. Conversely, for A && B, B is executed iff A has a zero exit code. For example:

    echo "123" | egrep '[0-9]+' >/dev/null && echo "number"
    ping -c1 srv1 || ping -c1 srv2 || echo "network down?"

More complex conditional execution can be achieved using if/then/else, for, while and case statements. They are frequently used in conjunction with the test command, for which the abbreviation '[' is frequently used. [Footnote: On many older Unix systems, /usr/bin/[ was a filesystem link to test. I've heard at least one apocryphal story of an over-zealous sysadmin tidying up /usr/bin/ by deleting 'spurious' files such as [.] test can be used to check the existence, access permissions, and modification times of files, as well as to perform comparisons between pairs of strings and even integers.
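A few standalone uses of test, as a hedged sketch (the second line assumes you are not logged in as root):

    $ test -d /tmp ; echo $?            # does the directory exist?
    0
    $ [ "$USER" = root ] ; echo $?      # string comparison, using the '[' form
    1
    $ [ 3 -lt 10 ] && echo "3 < 10"     # integer comparison driving &&
    3 < 10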
Combined with the flow-control statements, for example:

    if [ -e foo -a -e bar -o "$1" = "skip" ]
    then echo "files foo and bar both exist, or arg1 == skip"
    fi

    while [ ! -r foo ]
    do echo "foo is not readable" ; sleep 1 ; done

    case `arch` in
      hpux) PSOPT="-eafl" ;;   # UNIX-style 'ps' options for HP/UX
      *)    PSOPT="auxw" ;;    # BSD-style options are OK for Linux
    esac

When using these commands it can be tricky to remember where it is necessary (or forbidden) to insert command terminators, such as the newline character or ';'. Perhaps the easiest way to learn more about these commands is by examining other people's shell scripts. A good source is to look in a Unix system's boot scripts directory. On most Linux boxes this is /etc/init.d or /etc/rc.d/init.d.

7 Shell script examples

The following section contains a number of fragments from shell scripts that I find useful. I make no warranty as to their fitness for the purpose intended, or even that they demonstrate good programming style.

The following is handy for searching a directory hierarchy of 'C' source files looking for a particular identifier. Since all command line arguments are passed in to grep, it's possible to ask it to e.g. ignore case using the -i option.

    function trawl () {
      find . \( -name '*.[chsS]' -o -name '*.ch?' \) \
        -print | xargs fgrep -n "$@"
    }

This function can be used to send a kill signal to the named process(es):

    function killproc() {
      pid=`/usr/bin/ps -e | /usr/bin/grep $1 | \
           /usr/bin/sed -e 's/^ *//' -e 's/ .*//'`
      case "$pid" in
        [1-9]*) kill -TERM $pid ;;
      esac
    }

The following function can be used to add an entry to the tail of the PATH string, or deletes the entry if it's already present.

    function addpath () {
      case $PATH in
        *$1*) PATH=`echo $PATH | sed "s+:$1++"` ;;
        *)    PATH="$PATH:$1" ;;
      esac
      echo PATH=$PATH
    }

8 make and project building

When building a serious program you will have a number of different source files and a collection of more or less elaborate commands that compile them all and link the resulting fragments together. For large projects you may have helper programs that get run to generate either data files or even fragments of your code. When you have edited one source file you can of course re-build absolutely everything, but that is obviously clumsy and inefficient. make provides facilities so you can document which binary files depend on which sources, so that by comparing file date-stamps it can issue a minimal number of commands to bring your project up to date.

The information needed has two major components. The first is a catalogue of which files depend on which other ones. The second is a set of commands that can be executed to rebuild files when the things that they depend upon are found to have changed. By default the utility looks for this information in a file called Makefile. [Footnote: You can also use makefile without a capital. Many Unix users (slightly) prefer the capitalised version because it results in the file being shown early on in the output from ls when they inspect the contents of a directory.] In a practical Makefile there will often be a substantial amount of common material used to make the actual rules themselves more compact or easier to maintain. In particular, variables will often be used to specify the names of the compilers used and all sorts of other options.

My first sample (or template) Makefile will be for use with an imaginary programming language called frog. It imagines that source files are first compiled into object code, and then linked to form the final application.
    # Makefile for "princess" program
    COMPILE = frogc
    OPTS = -optimise -avoid_lillypads=yes
    LINK = froglinker

    princess: crown.o tadpole.o
            $(LINK) crown.o tadpole.o -to princess

    crown.o: crown.frog
            $(COMPILE) $(OPTS) crown.frog

    tadpole.o: tadpole.frog
            $(COMPILE) $(OPTS) tadpole.frog

    test.log: princess test.data
            date >test.log
            princess <test.data >>test.log

    # end of Makefile

The above file starts with a comment. Each line that begins with # is a comment. Next it defines three variables, which are supposed to be the name of the compiler, options to pass to the compiler and the name of the linker. Separating these off in this way and then referring to them symbolically makes things a lot easier when you want to change things, which in the long run you undoubtedly will. Note the use of round parentheses rather than curly braces to access Makefile variables.

The next few blocks are the key components of the file. Each starts with a line that has a target file-name followed by a colon, and then a list of the files upon which it depends. Following that can be a sequence of commands that should be obeyed to bring the target up to date. These commands must be inset using a tab character (n.b. not spaces). A line that does not begin with a tab marks the end of such a sequence of commands. More or less anywhere it is possible to refer to variables, and using a dollar sign you can refer to either something defined in the Makefile itself or to an environment variable exported by the shell. Additional variable definitions can be passed down when make is invoked.

To use this you just issue a command such as make test.log, where you specify one of the declared targets. make works out how many of the commands need to be executed, and so in the above case if nothing at all had been pre-built it would execute the commands

    frogc -optimise -avoid_lillypads=yes crown.frog
    frogc -optimise -avoid_lillypads=yes tadpole.frog
    froglinker crown.o tadpole.o -to princess
    date >test.log
    princess <test.data >>test.log

If you do not tell make what to do it updates whatever target is mentioned first in your Makefile.

A true Unix enthusiast will feel that the above Makefile is too easy to read and that it does not include enough cryptic sequences of punctuation marks. A slightly better criticism is that as the number of source files for our princess increases the contents of the file will become repetitive: it might be nice to be able to write the compilation command sequence just once. This is (of course) possible. In fact there will usually be a whole host of built-in rules and predefined variables (they are typically called macros in this context) that know about a wide range of languages, and the most you will ever want to do will be minor customisation of them. To illustrate the power of make I will stick with my imaginary Frog language. To tell make a general rule for making .o files from .frog ones you include something like the following in your Makefile:

    .frog.o:
            $(COMPILE) $(OPTS) $<

where the $< is a macro that expands to the name of the source file that needed recompilation. There are other slightly cryptic macros that can be used in rules like this. These funny automatically defined macros are needed so that you can refer to the files that the general file-suffix-based rule is being used on:
$@  expands to the name of the current target, ie the file that is to be re-created;

$<  expands to the name of the "prerequisite" file, ie the source file that had been seen to have a newer time-stamp than the target;

$*  is like $< except that what it expands to does not include the file suffix.

By default make stops if one of the commands it tries to run fails, and it then deletes the associated target. The idea here is that if just one of your source files contains a syntax error then everything will be re-built up to the stage that that is detected, and things will be left so that a subsequent invocation of make will try that file again and then continue.

There are in fact a few further things that I ought to mention with regard to make: if you use file suffixes other than the ones that are initially known about, you may need to declare them and specify their ordering. In the case being discussed here it would be necessary to specify first an empty list of suffixes (to cancel the built-in list [Footnote: At least some versions of make appear to require this.]) and then list the ones that are desired. The various file suffixes should be listed in order, with generated files first and original source ones last:

    .SUFFIXES:
    .SUFFIXES: .o .frog

It is also recommended that you put a line that says

    SHELL = /bin/sh

in every Makefile so that even if it is invoked by somebody who is using a non-standard shell its internal command processing will behave in a standard manner. Again (as you might expect) there are other declarations that can be provided for various specialist uses. I will not even mention them here.

Some versions of make provide extra facilities, notably the opportunity to build conditions into the file so that different things happen based on the values of macros. Another extension is the ability to reference other files so that it is as if their contents had formed part of the original Makefile. I suggest that you avoid any such features even when they do make life a lot easier, at least until you have had significant experience moving programs from one computer to another. Some people would disagree with me here, perhaps suggesting that whenever you move to a new computer you should fetch and install a copy of the GNU version of make on it so you can be certain that all of its capabilities are available. I will re-phrase my advice to suggest that you stick with very plain and simple Makefiles at least until you feel comfortable re-building GNU make from source and installing it on new computers!

It is well worth using a Makefile as the repository for many more commands than just those to recompile your code. You can usefully put in a target that tidies up by deleting all object and executable files (leaving just the original sources present), ones to run test cases, commands for formatting and printing the manual, scripts that pack up a version of your program for distribution, and interfaces to whatever backup/archive discipline you adhere to. The one file can then end up as documentation of all the major procedures associated with the management of your program: it is perhaps sensible then to make sure it has plenty of informative comments in it.

It is perhaps at this stage worth noting the command touch, which resets the date on a file to make it look as if it is new. Use of this can sometimes allow you to trick make into assuming that some binary files are new enough that they do not need re-building even though the general rules given suggest otherwise. This can be helpful if you make changes in some source files that you are certain do not really call for re-compilation: eg correction of spelling errors in their comments.
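For instance, a hedged sketch using the frog example from above, after editing only a comment in crown.frog:

    touch crown.o      # pretend the object file is newer than its source
    make princess      # now at most the link step will be re-run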
The following example is a simple Makefile for a collection of C sources. It uses the makedepend utility to auto-generate the dependencies list and append it to the end of the Makefile. Generating the dependencies in this way avoids obscure bugs that can be caused when hand-entered dependencies are inaccurate and result in some files failing to be compiled when certain include'd files are updated.

    TARGET = my_prog
    SRCS = a.c b.c c.c
    OBJS = $(SRCS:.c=.o)

    CC = gcc
    CFLAGS = -O2
    INCLUDES = -I.
    LD = ld
    LDFLAGS = -Bdynamic
    LIBS = -lm
    RM = rm -f

    .c.o:
            $(CC) $(CFLAGS) $(INCLUDES) -c $<

    $(TARGET): $(OBJS)
            $(LD) $(LDFLAGS) -o $@ $(OBJS) $(LIBS)

    clean:
            $(RM) *.o $(TARGET)

    depend:
            makedepend $(INCLUDES) $(SRCS)

    # DO NOT DELETE THIS LINE -- make depend depends on it.
    a.o: magic.h magic2.h
    b.o: magic.h magic2.h
    c.o: magic2.h

9 rcs and friends

The most common cause of corrupted or lost files these days is not liable to be hardware failure, viruses or rogue software. It will be carelessness on the part of the owner of the files. The proper protection against ill-conceived editing and false starts towards program upgrades is best based on keeping a fairly detailed incremental record of changes made to all files. Because most changes are rather small, these can be stored quite compactly by keeping a base version of each file and a list of changes made to it. The program diff, which is discussed later on, can compare two versions of a file to generate just such a list of changes. If properly organised, such a scheme could have just one file representing a base version of a module, the most recently released fully-tested version and several experimental versions. Any one of the versions stored could be re-created by applying the relevant set of stored edits to the base version of the file. Having got that far it would seem natural to attach commentary to each set of updates to document their author and intent, and to accept the fact that several programmers might be working on just one project, and all of them might be making their own separate changes. With all this in place
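you would have, in effect, re-invented the essentials of rcs, the Revision Control System from which this section takes its name. As a hedged sketch of the scheme in action (the commands are standard RCS, the file name is hypothetical):

    ci -l tadpole.frog      # check the file in with a log message, keep it locked
    co -r1.2 tadpole.frog   # re-create an earlier revision from the stored deltas
    rcsdiff tadpole.frog    # show current edits as a diff against the last check-in
    rlog tadpole.frog       # review the history of authors and log messages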
