This article summarizes the commands and shortcut keys that come up most often in Linux.
The target audience is engineers who are not yet comfortable in a terminal. It is written for beginners, but in a fairly technical style, so even people who use the terminal every day may find a few unexpected discoveries.
Major items | What to introduce |
---|---|
Introduction ~ Making command input easier | Tab completion, bash shortcuts |
Directory check | pwd, ls, tree |
Moving around and file operations | cd, mkdir, touch, mv, cp, rm, tar |
Text processing (filter commands) | cat, wc, head, tail, sort, uniq, grep, sed, awk, xargs, less, >, >> (redirect) |
Around installation | apt, yum, sudo, su, echo, which, whereis, source, ., chmod, chown, systemctl |
Around the OS | date, df, du, free, top, ps, kill, pkill, pgrep, netstat |
Other | find, history, diff, jobs, fg, bg, & (background execution), &&, $() (command substitution), <() (process substitution), $?, for |
Deliberately left out | vi, make, curl, rsync, ssh-keygen, npm, git |
Bonus | nice, sl |
Entering every command by hand is a pain. Let's make it easier with Tab completion and bash shortcuts. -> https://www.atmarkit.co.jp/ait/articles/1603/09/news019.html
At a minimum, complete command names and file names by hitting the Tab key. There are many bash shortcuts, but the ones I personally use most often are:
Ctrl + A, Ctrl + E (move the cursor to the beginning / end of the line)
Ctrl + U, Ctrl + K (delete to the beginning / end of the line) # handy when you mistype a password
Ctrl + W (delete one word backwards)
Ctrl + L (clear the terminal display)
↑, ↓, Ctrl + R (browse the command history)
Ctrl + C (stop the running command)
These are shortcuts provided by a library called readline. # Ctrl + C may be an exception
They can be used in many command line tools, not just bash, so they are worth remembering. (For example, they also work in python and mysql.)
Command name | What can it do? | What is the command name derived from? |
---|---|---|
pwd | Show the absolute path of the current directory | print working directory |
ls | View files and directories | list |
tree | Show directory structure | directory tree |
# pwd:Show the absolute path of the current folder
arene@~/qiita $ pwd
/home/arene/qiita
# ls:View files and directories in the current folder
$ ls
dir1 file1.txt file2.txt file3.txt
# ls -al:
# * -a: Show hidden files (origin: all)
# * -l: Show detailed information (origin: list?)
# * Anyway, use it when you want to see everything
# * If large files are involved, ls -alh is easier to read
# -h: show sizes with M (mega) / G (giga) units (origin: human readable)
# * The permissions (the r/w letters at the left edge) and the owner (arene arene) often matter when an error occurs during installation
# -> To change them, use chmod and chown respectively
# * You will often see environments where ll works the same as ls -al or ls -l (the keyword is "alias"; see the link below)
$ ls -al
total 0
drwxr-xr-x 1 arene arene 4096 Nov 10 18:07 .
drwxrwxrwx 1 arene arene 4096 Nov 10 18:04 ..
-rw-r--r-- 1 arene arene 0 Nov 10 18:07 .hidden_file1.txt
-rw-r--r-- 1 arene arene 0 Nov 10 18:14 dir1
-rw-r--r-- 1 arene arene 4 Nov 10 18:04 file1.txt
-rw-r--r-- 1 arene arene 0 Nov 10 18:02 file2.txt
-rw-r--r-- 1 arene arene 0 Nov 10 18:02 file3.txt
# ls -ltr: Show the newest files at the bottom
# * -t: Sort by timestamp (origin: time)
# * -r: Reverse the order (origin: reverse)
# * Frequently used to find the latest log file
# * The newest entry is at the bottom, so it never scrolls off the screen even with many files
# * Conversely, to put the oldest files at the bottom, use ls -lt
$ ls -ltr
total 0
-rw-r--r-- 1 arene arene 123 Oct 10 02:30 20191010.log
-rw-r--r-- 1 arene arene 123 Oct 11 02:30 20191011.log
-rw-r--r-- 1 arene arene 123 Oct 12 02:30 20191012.log
-rw-r--r-- 1 arene arene 123 Oct 13 02:30 20191013.log
-rw-r--r-- 1 arene arene 123 Oct 14 02:30 20191014.log
-rw-r--r-- 1 arene arene 123 Oct 15 02:30 20191015.log
-rw-r--r-- 1 arene arene 123 Oct 16 02:30 20191016.log
-rw-r--r-- 1 arene arene 123 Oct 17 02:30 20191017.log
-rw-r--r-- 1 arene arene 123 Oct 18 02:30 20191018.log
-rw-r--r-- 1 arene arene 123 Oct 19 02:30 20191019.log ← Latest file comes to the bottom
$
Digression: alias settings of engineers around the world
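The `ll` alias mentioned above is easy to set up yourself. A minimal sketch (the alias name and definition are the common convention, not a built-in; the `shopt` line is needed because alias expansion is off by default outside interactive shells):

```shell
# Enable alias expansion (it is off by default in non-interactive shells)
shopt -s expand_aliases
alias ll='ls -al'               # the common "ll" alias, usually defined in ~/.bashrc
ll /tmp > /dev/null && echo "ll works like ls -al"
```

Putting the `alias ll='ls -al'` line in ~/.bashrc makes it permanent for interactive sessions.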
# tree: Show the directory structure
# * ls -R gives similar information, but is harder to read
# * Install it with sudo apt install tree or yum install tree
# * Introduced here because it is used frequently in the explanations below
$ tree
.
|-- dir1
| |-- dir11
| | |-- file111.txt
| | `-- file112.txt
| |-- file11.txt
| |-- file12.txt
| `-- file13.txt
|-- file1.txt
|-- file2.txt
`-- file3.txt
Command name | What can it do? | What is the command name derived from? |
---|---|---|
cd | Hierarchical movement(Change current directory) | change directory |
mkdir | Creating a directory | make directory |
touch | File creation, timestamp update | ?? |
mv | Move files and directories | move |
cp | Copy files and directories | copy |
rm | Delete file | remove |
tar | File compression and decompression (tar format) | tape archive (← I learned this for the first time) |
# cd path:Go to path
arene@~/qiita $ ls
dir1 file1.txt file2.txt file3.txt
arene@~/qiita $ cd dir1/
arene@~/qiita/dir1 $ pwd
/home/arene/qiita/dir1
# cd:Go to the logged-in user's home directory
arene@~/qiita/dir1 $ cd
arene@~ $ pwd
/home/arene
# cd -: Move to the previous directory
# * Useful for bouncing between two directories that are far apart
# * pushd and popd can do something similar, but I prefer this
arene@~/qiita/dir1 $ pwd
/home/arene/qiita/dir1
arene@~/qiita/dir1 $ cd
arene@~ $ pwd
/home/arene
arene@~ $ cd -
arene@~/qiita/dir1 $ pwd
/home/arene/qiita/dir1
arene@~/qiita/dir1 $ cd -
arene@~ $ pwd
/home/arene
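The same toggle can be reproduced anywhere; a self-contained sketch (the /tmp directory names are throwaway examples):

```shell
mkdir -p /tmp/demo_a /tmp/demo_b   # throwaway directories for the demo
cd /tmp/demo_a
cd /tmp/demo_b
cd -          # cd - prints the directory it is returning to
pwd           # -> /tmp/demo_a
```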
# cd ~/path: Move to the path under the login user's home directory
# * ~ expands to the logged-in user's home directory (= tilde expansion)
# * ~xxx expands to the home directory of user xxx
arene@~/qiita/dir1 $ cd ~/bin/
arene@~/bin $ pwd
/home/arene/bin
# mkdir directory_name:Create directory(Only one level)
# mkdir -p path/to/directory:Create a deep directory at once
$ ls #Nothing at first
$ mkdir dir1 #Create directory
$ ls
dir1
$ mkdir aaa/bbb/ccc #Creating a deep hierarchy at once requires the -p option
mkdir: cannot create directory ‘aaa/bbb/ccc’: No such file or directory
$ ls
dir1
$ mkdir -p aaa/bbb/ccc
$ tree
.
|-- aaa
| `-- bbb
| `-- ccc
`-- dir1
# touch file_name: Create a new file, or update its timestamp to the current time
# * Originally a command for updating a file's timestamp,
#   but since it creates the file when it doesn't exist, in practice it feels like a file-creation command.
$ touch file1 #Create New
$ ls -l
-rw-r--r-- 1 arene arene 0 Nov 10 10:10 file1
$ touch file1 #Rerun 5 minutes later -> the timestamp is updated
$ ls -l
-rw-r--r-- 1 arene arene 0 Nov 10 10:15 file1
# touch --date="YYYYMMDD hh:mm:ss" file_name: Set a file's timestamp to an arbitrary time
# * Occasionally used when testing time-related behavior
# * Related command:
#   date -s "YYYYMMDD hh:mm:ss": change the OS time (-s means set)
#   (probably the origin of touch's --date option)
$ touch --date "20101010 10:10:10" file1
$ ls -l
total 0
-rw-r--r-- 1 arene arene 0 Oct 10 2010 file1
#Application:
#Combined with brace expansion, you can easily create a large number of test files.
#Brace expansion (sequence ver): {num1..num2} is a bash feature that expands to the sequence num1 through num2
# (for the enumeration ver, see the next section on mv)
$ touch file{1..3}.txt # -> expands to touch file1.txt file2.txt file3.txt
$ ls
file1.txt file2.txt file3.txt
# mv source/path destination/path: Move files and directories
# mv filename_before filename_after: Rename
# * To the OS, renaming and moving are almost the same operation
# * Add the -f option if the confirmation prompts get noisy (f: force / beware of slips!)
$ tree
.
|-- dir1
|-- dir2
`-- file1
$ mv file1 dir1 #Move
$ tree
.
|-- dir1
| `-- file1
`-- dir2
$ mv dir1/file1 dir1/file1.txt #rename
$ tree
.
|-- dir1
| `-- file1.txt
`-- dir2
#Application:
#Depending on the directory structure, brace expansion lets you write this concisely.
#Brace expansion (enumeration ver): cmd {aaa,bbb,ccc} is a bash feature that expands to cmd aaa bbb ccc
#* Be careful not to add spaces: {aaa, bbb, ccc} does not expand
$ tree
.
|-- dir1
| `-- file1.txt
`-- dir2
$ mv dir{1,2}/file1.txt # expands to mv dir1/file1.txt dir2/file1.txt
$ tree
.
|-- dir1
`-- dir2
`-- file1.txt
#Application 2:
#Running cd !$ after mv lets you follow the file to its destination smoothly
# * In bash, !$ expands to the last argument of the previously executed command
# * !$ is shorthand for !-1$. !-2$ expands to the last argument of the command executed two commands ago.
# * ! introduces history expansion; $ means "the last argument", the same image as in regular expressions
# * I'm not a big fan of the !-style commands because they feel like black magic,
#   but mv -> cd !$ is so convenient that I memorized it (same for cp)
# * I also often do mv -> ls -> cd !-2$
arene@~/qiita $ ls
dir1 file1
arene@~/qiita $ mv file1 dir1/a/b/c/d/e/f/g/
arene@~/qiita $ cd !$ # !$ = the last argument of the previous command = dir1/a/b/c/d/e/f/g/
cd dir1/a/b/c/d/e/f/g/ # history expansion echoes the expanded command to standard output
arene@~/qiita/dir1/a/b/c/d/e/f/g $ ls
file1
Detailed information on history expansion: https://mseeeen.msen.jp/bash-history-expansion/
# cp -r source/path destination/path: Copy files and directories
# * -r: Copy directories recursively (origin: recursive)
# * -f: Force the copy without confirmation (origin: force) <- beware of slips!
# * -p: Preserve permissions, ownership and timestamps across the copy (origin: preserve)
# * I always add -r (it does no harm for plain files, and switching options between files and directories is a hassle)
# * I basically leave -f off, since the confirmation prompt can stop a mistake at the last moment
# * Pay attention to whether -p is on when permissions matter (I add it by hand when needed)
# * I also often use the similar command scp,
#   which copies files and directories over the network (the syntax is much the same as cp)
$ tree
.
|-- dir1
| `-- file1
`-- dir2
$ cp dir1/file1 dir2 #Copy the file to another folder
$ tree
.
|-- dir1
| `-- file1
`-- dir2
`-- file1
$ cp dir1/file1 dir2/file2 #Copy the file to another folder while renaming
$ tree
.
|-- dir1
| `-- file1
`-- dir2
|-- file1
`-- file2
#Application:
#Combined with brace expansion, backup files can be created concisely
#Brace expansion (enumeration ver): cmd {aaa,bbb,ccc} is a bash feature that expands to cmd aaa bbb ccc
$ ls
important_file
$ cp important_file{,.bak} # expands to cp important_file important_file.bak
$ ls
important_file important_file.bak
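The same brace trick combines with command substitution ($( ), listed in the "Other" row of the opening table) to make date-stamped backups. A sketch; the file name is arbitrary:

```shell
touch important_file
cp important_file{,.$(date +%Y%m%d)}   # -> cp important_file important_file.YYYYMMDD
ls important_file*                     # the backup now carries today's date
rm important_file*                     # clean up the demo files
```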
# rm -f file_name: Delete a file
# rm -rf directory_name: Delete a directory
# * -f: Force deletion without confirmation (origin: force)
# * -r: Recursively delete a directory and everything below it (origin: recursive)
# * Unlike Windows, there is no recycle bin: once deleted, it cannot be restored, so be careful
# * Carelessly running "rm -rf /" removes the entire system, including the OS.
#   You hit Ctrl+C in a panic, but by then the basic command executables are gone and nothing works anymore!
#   ...That's how it goes. (Long ago, the person in the next seat actually did it)
# * Writing rm -rf /${DIR_PATH} in a shell script where ${DIR_PATH} turns out to be an empty string
#   is perhaps the classic way to end up running "rm -rf /".
#   (To prevent such accidents, add set -u to your shell scripts (see the link below))
$ ls #initial state
dir1 dir2 dir3 file1.txt file2.txt file3.txt
$ rm -f file1.txt #Delete by specifying the file name
$ ls
dir1 dir2 dir3 file2.txt file3.txt
$ rm -f *.txt #Wildcard(*)Delete all txt files using
$ ls
dir1 dir2 dir3
$ rm -f dir1 #Directories cannot be deleted without the -r option
rm: cannot remove 'dir1': Is a directory
$ rm -rf dir1 #Delete by specifying the directory name
$ ls
dir2 dir3
$ rm -rf dir* #Wildcard(*)Bulk delete using
$ ls
$
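As one defence against the empty-variable accident described above, bash's ${VAR:?message} expansion aborts instead of expanding to an empty string. A minimal sketch (DIR_PATH and the /nonexistent_demo path are placeholders invented for the demo):

```shell
DIR_PATH=""   # imagine a bug left this empty
# ${DIR_PATH:?...} makes the (sub)shell abort rather than let the path collapse
( rm -rf "/nonexistent_demo/${DIR_PATH:?DIR_PATH is empty}" ) 2>/dev/null \
  || echo "aborted safely, nothing deleted"
```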
Digression: Set -eu when writing a shell script
# tar -czvf xxx.tgz file1 file2 dir1 : compress (archive file1, file2 and dir1 into the compressed file xxx.tgz)
# tar -tzvf xxx.tgz: Show the file names contained in the archive (= a dry run of extraction)
# tar -xzvf xxx.tgz: Extract
# * tar's options are hyper confusing...
#   but I think the three above are enough for everyday work
# * Remember c(create), t(list), x(extract) + zvf
# * Note that archiving and compression are different things:
#   archiving combines multiple files into one; compression reduces file size.
$ ls #initial state
dir1 dir2 file1.txt file2.txt
$ tar czvf something.tgz dir* file* #compression
dir1/
dir2/
file1.txt
file2.txt
$ ls
dir1 dir2 file1.txt file2.txt something.tgz
$ rm -rf dir* file* #Delete the original file once
$ ls
something.tgz
$ tar tzvf something.tgz #See only the contents
drwxr-xr-x arene/arene 0 2019-11-12 00:31 dir1/
drwxr-xr-x arene/arene 0 2019-11-12 00:30 dir2/
-rw-r--r-- arene/arene 0 2019-11-12 01:00 file1.txt
-rw-r--r-- arene/arene 0 2019-11-12 01:00 file2.txt
$ ls
something.tgz
$ tar xzvf something.tgz #Extract
dir1/
dir1/file1.txt
dir2/
file1.txt
file2.txt
$ ls
dir1 dir2 file1.txt file2.txt something.tgz
#Digression:
#tar may return a non-zero exit status even though compression/extraction itself succeeded
# (e.g. when a file's timestamp is in the future)
#Be careful when checking tar's exit status in a shell script that uses set -e
The real thrill of Linux: text processing. Before moving on to the individual commands, let's look at the big picture.
If you write
Command A | Command B | Command C
then A's standard output becomes B's standard input, and B's output becomes C's input: each command filters the text and passes it down the line. That's why these are called filter commands. Let's do some text processing!
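A minimal pipeline to make this concrete (the fruit names are arbitrary input):

```shell
# Each "|" feeds the left command's standard output into the right command's standard input
printf 'banana\napple\napple\ncherry\n' | sort | uniq -c | sort -rn
# -> the duplicated "apple" floats to the top with a count of 2
```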
Command name | What can it do? | What is the command name derived from? |
---|---|---|
cat | Concatenate file contents and output them | concatenate |
wc | Count words and lines | word count |
head | Output the first n lines | head |
tail | Output the last n lines | tail |
sort | Sort lines | sort |
uniq | Remove duplicate lines | unique |
grep | Text search | global regular expression print |
sed | String replacement | stream editor |
awk | Traditional UNIX text-processing language | initials of its 3 developers |
xargs | Turn standard input into command line arguments | execute arguments? |
less | Page through standard input like an editor | less is more (an improved more command) |
>, >> (redirect) | Write standard output to a file | |
# cat file1: Output the contents of file1 to standard output
# cat file1 file2: Output the contents of file1, then the contents of file2, to standard output
# * Use 1: Viewing a file of a few lines, read-only
# * Use 2: Checking multiple log files at once
# * Often used as the starting point of a filter pipeline
$ cat .gitignore #Dump a single file
.DS_Store
node_modules
/dist
$ ls
access.log error1.log error2.log
$ cat error*.log # check error1.log and error2.log together
2019/09/14 22:40:33 [emerg] 9723#9723: invalid number of arguments in "root" directive in /etc/nginx/sites-enabled/default:45
2019/09/14 22:42:24 [notice] 9777#9777: signal process started
2019/09/14 22:49:23 [notice] 9975#9975: signal process started
2019/09/14 22:49:23 [error] 9975#9975: open() "/run/nginx.pid" failed (2: No such file or directory)
2019/09/14 22:56:00 [notice] 10309#10309: signal process started
2019/09/14 22:56:10 [notice] 10312#10312: signal process started
2019/09/14 22:56:10 [error] 10312#10312: open() "/run/nginx.pid" failed (2: No such file or directory)
2019/09/14 22:56:22 [notice] 10318#10318: signal process started
2019/09/14 22:56:22 [error] 10318#10318: open() "/run/nginx.pid" failed (2: No such file or directory)
2019/12/07 21:49:50 [notice] 1499#1499: signal process started
2019/12/07 21:49:50 [error] 1499#1499: open() "/run/nginx.pid" failed (2: No such file or directory)
2019/12/07 21:51:19 [emerg] 1777#1777: invalid number of arguments in "root" directive in /etc/nginx/sites-enabled/default:45
# wc -l file1: Count the lines in file1 (shows line count + file name)
# cat file1 | wc -l: Count the lines in file1 (shows only the line count)
# * -l: Count lines (origin: line)
# * -w: Count words (origin: word)
# * -c: Count bytes (origin: char? / apparently it is c because the option dates from the era of one byte per character)
# * Honestly, I've only ever used -l (for byte counts, ls is enough)
# * When calling it from a program the file name gets in the way, so the cat file | wc -l form is common
$ ls
access.log error1.log error2.log
$ wc -l error1.log #Line count(1)
7 error1.log
$ wc -l error2.log #Line count(2)
5 error2.log
$ wc -l error*.log #Count the number of lines in multiple files by specifying a wildcard
7 error1.log
5 error2.log
12 total
$ cat error*.log | wc -l # count lines after concatenating error1.log and error2.log
12
#Application: a quick way to count lines of code
$ ls
dist src node_modules package.json public tests
$ find src/ -type f | xargs cat | wc -l #Count the total number of lines in all files under src
271
#Commentary
# * find src/ -type f: Output the list of files under src
# * cmd1 | cmd2: cmd2 receives cmd1's standard output as its standard input
# * cmd1 | xargs cmd2: cmd2 receives cmd1's standard output as its "command line arguments"
# * find | xargs cat: Concatenate all files under src found by find and output them to standard output
# * find | xargs cat | wc -l: Count the lines of all files under src found by find
# * See also the find and xargs sections
# head -n 3 file1: Output the first 3 lines of file1
# * Not used all that often, but introduced here as the counterpart of tail, which comes next
# * Use 1: Peek at just the top of a huge file
# * Use 2: Grab only the header line of a file for use in a program
$ cat file1.txt
1 aaa AAA
2 bbb BBB
3 ccc CCC
4 ddd DDD
5 eee EEE
6 fff FFF
7 ggg GGG
8 hhh HHH
9 iii III
10 jjj JJJ
11 kkk KKK
12 lll LLL
13 mmm MMM
$ head -n 3 file1.txt
1 aaa AAA
2 bbb BBB
3 ccc CCC
# tail -n 3 file1: Output the last 3 lines of file1
# * Use 1: See only the end of a huge log file
# * When people say tail, they usually mean tail -f, described next
$ cat file1.txt
1 aaa AAA
2 bbb BBB
3 ccc CCC
4 ddd DDD
5 eee EEE
6 fff FFF
7 ggg GGG
8 hhh HHH
9 iii III
10 jjj JJJ
11 kkk KKK
12 lll LLL
13 mmm MMM
$ tail -n 3 file1.txt
11 kkk KKK
12 lll LLL
13 mmm MMM
# tail -f error.log: Monitor error.log and output whatever gets appended (origin of -f: follow)
# * Shines at log checking (it outputs the end of the file, so the latest entries appear)
# * Monitoring a file's updates with tail -f is commonly just called "tailing" a log (I've heard this at several workplaces)
# * Tail the log file and lie in wait (narrowing with grep as needed)
# -> reproduce the bug
# -> pinpoint the log lines emitted the moment the problem occurred
#How to use:
$ tail -f error.log #Log monitoring
-> Keeps waiting for updates to error.log until you stop it with Ctrl+C
-> Whenever error.log is updated, the appended contents are printed as-is
$ tail -f error.log | grep 500 #Monitor only log lines containing 500
-> Keeps waiting for updates to error.log until you stop it with Ctrl+C
-> Prints lines appended to error.log that contain 500
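Since tail -f never exits on its own, a reproducible demo needs a background writer and a timeout; a sketch assuming GNU coreutils' timeout command is available (demo.log and its contents are throwaway):

```shell
echo "old line" > demo.log
( sleep 0.2; echo "NEW ENTRY" >> demo.log ) &   # simulate an application appending to a log
timeout 1 tail -f demo.log || true              # prints "old line", then "NEW ENTRY", then times out
wait
rm demo.log
```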
# sort file1: Sort the lines of file1
# uniq file1: Remove adjacent duplicate lines from file1
# cat file1 | sort | uniq: Sort file1 and remove the duplicates
# * sort and uniq come as a set, so they are introduced together
# * The order sort -> uniq matters, because uniq only removes adjacent duplicates (see the example below)
# * The original file is not changed (a property shared by all filter commands)
#
# * sort has quite a variety of options: -r for reverse sort, -R for random sort, and so on
# * It can also sort on a specific column, e.g. sorting ls -l output by file size
#   (though in that particular case ls -lS does the job)
# * But honestly I don't remember them
# * If you want something complicated rather than a plain sort, use a regular programming language
#   (sort/uniq are plenty for easy cases, like de-duplicating the lines of a csv file at hand)
$ cat not_sorted_and_not_unique.txt
1 aaa AAA
3 ccc CCC
2 bbb BBB
3 ccc CCC
2 bbb BBB
1 aaa AAA
3 ccc CCC
$ cat not_sorted_and_not_unique.txt | sort
1 aaa AAA
1 aaa AAA
2 bbb BBB
2 bbb BBB
3 ccc CCC
3 ccc CCC
3 ccc CCC
$ cat not_sorted_and_not_unique.txt | sort | uniq
1 aaa AAA
2 bbb BBB
3 ccc CCC
$ cat not_sorted_and_not_unique.txt | uniq | sort # reversing the order to uniq -> sort does not give the expected result
1 aaa AAA
1 aaa AAA
2 bbb BBB
2 bbb BBB
3 ccc CCC
3 ccc CCC
3 ccc CCC
#Small story:Random number generation
$ echo {1..65535} | sed 's/ /\n/g' | sort -R | head -n 1
11828
#Commentary
# * echo {1..65535}: generates "1 2 3 4 5 (omitted) 65535" (keyword: brace expansion (above))
# * sed 's/ /\n/g': Replace spaces with newlines
# * sort -R: Random sort by line
# * head -n 1: Show only the first line
# * It takes about a second, so it's not practical at all
# * I wrote it because I was a little happy to learn about sort -R
# * Still, I think the fun of filter commands is exactly this: chaining commands to do all sorts of things with a bit of ingenuity
#   (people hooked on that charm are the so-called "shell one-liner artists")
For those who want to know more: the sort command - basics, applications and traps
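If GNU coreutils' shuf is available, the random-number one-liner above comes back instantly; a sketch:

```shell
shuf -i 1-65535 -n 1   # pick one integer in 1..65535 at random
```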
# grep ERROR *.log: Extract only the lines containing ERROR from files with the .log extension
# cat error.log | grep ERROR: Extract only the lines containing ERROR from error.log
# cat error.log | grep -2 ERROR: Output the lines containing ERROR plus the 2 lines before and after each match
# cat error.log | grep -e ERROR -e WARN: Extract lines containing ERROR or WARN from error.log
# cat error.log | grep ERROR | grep -v 400: Extract the lines containing ERROR, then exclude the lines containing 400
# * -e: Specify multiple patterns, matched as an OR condition (origin: expression)
# * -v: Exclude lines matching the pattern (origin: invert match)
# * Text narrowing is in such high demand that grep shines in the middle and late stages of one-liners
# * Regular expressions can be used too
# * Personally I only use the cat file | grep form (for all filter commands I go brain-dead with cat file | cmd)
$ cat file1.txt
1 aaa AAA
2 bbb BBB
3 ccc CCC
4 ddd DDD
5 eee EEE
6 fff FFF
7 ggg GGG
8 hhh HHH
9 iii III
10 jjj JJJ
11 kkk KKK
12 lll LLL
13 mmm MMM
$ cat file1.txt | grep -e CCC -e JJJ
3 ccc CCC
10 jjj JJJ
$ cat file1.txt | grep -2 -e CCC -e JJJ
1 aaa AAA
2 bbb BBB
3 ccc CCC
4 ddd DDD
5 eee EEE
--
8 hhh HHH
9 iii III
10 jjj JJJ
11 kkk KKK
12 lll LLL
$ cat file1.txt | grep -2 -e CCC -e JJJ | grep -v -e AAA -e BBB -e KKK -e LLL
3 ccc CCC
4 ddd DDD
5 eee EEE
--
8 hhh HHH
9 iii III
10 jjj JJJ
# cat file1 | sed 's/BEFORE/AFTER/g': Replace every BEFORE in file1 with AFTER
# * s/BEFORE/AFTER/g: Replace BEFORE with AFTER (origin: substitute and global?)
# * s/BEFORE/AFTER/: Replace only the first BEFORE on each line
# * Worth remembering because :%s/BEFORE/AFTER/g does the same bulk replace in vi
#   (example: git rebase -i HEAD~5 -> vi opens -> :%s/pick/s/g to squash the last 5 commits)
# * Besides bulk replacement it can also delete and partially rewrite, and regular expressions can be used
# * The original file is not changed (though overwriting is possible with the -i option)
# * Bulk replacement is the only thing I can do off the top of my head, but shell one-liner artists make full use of sed and awk
$ cat typo.txt #File with misspelling
Hello Wolrd!
Wolrd Wide Web
$ cat typo.txt | sed 's/Wolrd/World/g' #Correct spelling mistakes
Hello World!
World Wide Web
$ cat typo.txt | sed 's/Wolrd/World/g' > typo_fixed.txt #Save the corrected result in another file
$ cat typo_fixed.txt
Hello World!
World Wide Web
For those who want to know more: How to write in this case with sed?
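The -i (in-place) option mentioned above rewrites the file itself; a sketch using GNU sed syntax (BSD/macOS sed wants -i '' instead; the file name is a throwaway):

```shell
printf 'Hello Wolrd!\n' > typo_demo.txt
sed -i 's/Wolrd/World/g' typo_demo.txt   # no output; the file itself is rewritten
cat typo_demo.txt                        # -> Hello World!
rm typo_demo.txt
```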
# cmd1 | awk '{print $5}': From cmd1's output, show only the 5th column, split on whitespace
# cmd1 | awk -F ',' '{print $5}': From cmd1's output, show only the 5th column, split on commas
# * The king of one-liners (personal opinion)
# * It has if, for and variables; strictly speaking it's a programming language rather than a command
# * The go-to tool when you want to extract the nth column of text separated by some delimiter
# * In awk terminology it's not the "nth column" but the "nth field"
# * I can't write it, but I can just about read it by feel
#   (in a legacy project I once saw awk parsing a home-grown TSV format and converting it to XML)
#   (↑ not at my current job!)
# * You won't use it often, so it may be fine to read this section as culture
$ ls -l
total 0
drwxr-xr-x 1 arene arene 4096 Feb 4 22:40 abc
drwxr-xr-x 1 arene arene 4096 Feb 4 22:40 def
-rw-r--r-- 1 arene arene 134 Feb 4 22:50 file1.txt
arene@~/qiita/src $ ls -l | awk '{print $5}' #Show only the 5th column
4096
4096
134
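Because awk is a language, it can aggregate as well as pick columns; a sketch with made-up comma-separated input:

```shell
# Sum the 2nd comma-separated field; the END block runs after all input lines are read
printf 'a,10\nb,20\nc,30\n' | awk -F ',' '{ total += $2 } END { print total }'
# -> 60
```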
# cmd1 | xargs cmd2: Execute cmd2 with cmd1's output as its command line arguments
# * Whereas cmd1 | cmd2 passes cmd1's output to cmd2 as "standard input",
#   cmd1 | xargs cmd2 passes cmd1's output to cmd2 as "command line arguments"
# * My personal favorite command (using it well makes you feel smarter)
# * Maybe it's just habit, but quite a few tasks seem impossible to one-line without it
# * Often used after find (bulk operations on many files)
# * It's an advanced command, and easy-to-understand yet practical examples are hard to come by
$ ls -1
src
test
$ ls -1 | echo #echo doesn't read standard input, so nothing is printed
$ ls -1 | xargs echo #It can be displayed by passing it as an argument with xargs
src test
$ ls -1 | xargs -n 1 echo # with -n 1, the input is passed one line at a time, so each line is echoed separately
src
test
#Application: bulk-renaming multiple files
# find dir_name -type f | xargs -I{} mv {} {}.bak: add .bak to every file under dir_name
# * For this particular job the rename command may be easier (I just don't use it because I'm not familiar with it)
# * When the received string is needed more than once, as with mv or cp, use the -I{} option
$ tree #Initial state
.
|-- src
| |-- main.js
| `-- sub.js
`-- test
|-- main.js
`-- sub.js
$ find test/ -type f #Find the relative paths of files under test
test/main.js
test/sub.js
$ find test/ -type f | xargs -I{} mv {} {}.test
#Expands to the following (-I{} makes each subsequent {} be replaced with the input line)
# mv test/main.js test/main.js.test
# mv test/sub.js test/sub.js.test
$ tree
.
|-- src
| |-- main.js
| `-- sub.js
`-- test
|-- main.js.test
`-- sub.js.test
$ find test/ -type f | sed 's/js.test/js/g' | xargs -I{} mv {}.test {} #Undo
# sed turns test/main.js.test into test/main.js, which is then passed to mv as {}
$ tree
.
|-- src
| |-- main.js
| `-- sub.js
`-- test
|-- main.js
`-- sub.js
# less file1: View file1 (read-only)
# cat file1 | cmd1 | cmd2 | less: View the result of processing file1 through a pipeline
# * The command to reach for when you want to look at something without spewing it into the terminal
# * An improved version of the similar command more (less is more!)
# * Read-only, so safe (if you don't intend to edit, stop viewing files in vi)
# * Some vi key bindings work:
# gg: Go to the first line
# G: Go to the last line
# /pattern: Search within the file
# q: Quit
# * Pressing F does the same thing as tail -f; contrary to its name, less is quite sophisticated
# (see the link below for examples)
For those who want to know more: "You gradually want to use the less command to read files" / "11 less command tips engineers should know"
# cmd1 >> file1: Write cmd1's output to file1 (append)
# cmd1 > file1: Write cmd1's output to file1 (overwrite)
$ cat file1.txt #Before redirect
1 aaa AAA
2 bbb BBB
3 ccc CCC
$ echo "4 ddd DDD" >> file1.txt #redirect(append)
$ cat file1.txt
1 aaa AAA
2 bbb BBB
3 ccc CCC
4 ddd DDD
$ echo "4 ddd DDD" > file1.txt #redirect(Overwrite)
$ cat file1.txt
4 ddd DDD
# echo "echo login!" >> ~/.bashrc: Append a setting to the end of .bashrc * Don't actually run this!
# * .bashrc is a bash configuration file
# * Using > instead of >> would overwrite the file and wipe your settings, so be careful
# * Redirects like this appear when editing config files in setup manuals and environment-building automation scripts
#   (when building an environment by hand, mixing up >> and > is scary, so I think it's better to just open the file in an editor)
# something.sh > log.txt: Log something.sh's output (standard output only)
# something.sh > log.txt 2>&1: Log something.sh's output (standard output + standard error)
# something.sh >/dev/null 2>&1: Discard something.sh's output entirely
$ cat something.sh #A shell script that outputs a message to the standard output on the first line and the standard error output on the second line.
#!/bin/bash
echo standard output
syntax-error!!!! # standard error
$ ./something.sh > log.txt #If you simply redirect, only standard output will be redirected
./something.sh: line 3: syntax-error!!!!: command not found
$ cat log.txt
standard output
$ ./something.sh > log.txt 2>&1 # adding 2>&1 redirects both
$ cat log.txt
standard output
./something.sh: line 3: syntax-error!!!!: command not found
$ ./something.sh >/dev/null 2>&1 #Make sure nothing is printed (the so-called "throw it into /dev/null")
$
$ ./something.sh 2>&1 >/dev/null #If you reverse it, it will not work.
./something.sh: line 3: syntax-error!!!!: command not found
#Commentary
# * 1: standard output 2: standard error
# * /dev/null: a special empty file provided by the OS; a bottomless trash can
# * > log.txt is the same as 1>log.txt: standard output goes to the log
# * > log.txt 2>&1 points 2 (standard error) at the place 1 (standard output) currently points to (= the log file)
# * The reason 2>&1 > log.txt does not work is that the two redirections are processed left to right:
# (1) 2>&1: point standard error at the current standard output (the terminal)
# (2) > log.txt: point standard output at the log file
# => Result: standard error goes to the default standard output, and only standard output goes to the log file
For those who want to know more: Let's memorize it - the meaning of command > /dev/null 2>&1
Command name | What can it do? | What is the command name derived from? |
---|---|---|
apt, yum | Command installation | Advanced Package Tool, Yellowdog Updater Modified |
sudo | Execute command with root privileges | superuser do(substitute user do) |
su | User switching | substitute user |
echo | Display of character string | echo |
env | Display environment variables | environment |
which, whereis | Find the location of the command | which, where is |
source, . | Apply settings (run a file's contents in the current shell) | source |
chmod | Change file and directory permissions | change mode |
chown | Change the owner of files and directories | change owner |
systemctl | Service start, stop, etc. | system control |
# apt install git: Install git (Debian-family OSes such as Ubuntu)
# yum install git: Install git (Red Hat-family OSes such as CentOS)
# * Usually run as sudo apt ...
# * If you run something and get "Permission denied" or a similar message, just sudo it for the time being (rough, I know)
#   (or set the permissions properly with chown / chmod)
$ sudo apt install git
(A lot of messages come out, but omitted)
# sudo cmd1: Run cmd1 with root privileges
# * Used frequently on Ubuntu (which follows the idea that you should not work as root)
# * On CentOS I switch to the root user with su (next section) instead, so I rarely use it there
# * Some users may not be allowed to use it
#   (secure environments limit which users can sudo)
$ sudo vi /etc/hosts #Edit config file that only root can change
[sudo] password for arene:
$
# su user1: switch to user1 (environment variables are inherited from the current shell)
# su - user1: switch to user1 (discard the current environment variables and use user1's default environment)
# su -: switch to the root user (discard the current environment variables and use root's default environment)
# * I always add the hyphen
# (To prevent unexpected mistakes caused by inherited environment variables)
# * On CentOS I used to use su - oracle and su - postgres
$ su -
Password:
#
# echo abc: output the string abc
# echo $PATH: output the environment variable PATH
# * Use 1: output usage and error messages in shell scripts
# * Use 2: checking environment variables
# * If you run a command and get "command not found", it is usually not in PATH, so first check and fix PATH.
$ echo abc
abc
$ echo $LANG
UTF-8
Supplementary information: What environment variables are, and what "adding to PATH" means
# env | less: check environment variables
# * env alone works, but if there are many environment variables the output scrolls off, so check with less
# -> With less open, you can search for "PATH" by typing /PATH
# which cmd: show where the cmd binary is located
# whereis cmd: a slightly more detailed version of which
# * To be honest, I only use which
# * Useful in cases like: I've installed multiple versions of node, so which binary is actually running now?
# Or: I want to delete an unnecessary command, but where is it?
$ which ls
/bin/ls
$ ls
access.log error1.log error2.log src
$ /bin/ls
access.log error1.log error2.log src
# source ~/.bashrc: reload .bashrc
# . ~/.bashrc: same as above (. is an alias for source)
# * 100% of my uses are reloading the shell configuration file after changing it (self-survey)
# * For what it's worth, it can also execute a shell script
#
# * source is a command that executes the file given as an argument in the "current shell"
# * When a command or shell script is executed normally, it runs in "a newly spawned shell"
# (so the variables in the current shell stay clean)
# * source, on the other hand, processes the file in the current shell,
# so environment variables and aliases changed during processing survive after it finishes (= the settings are reflected)
$ env | grep MY_ENV # before
$ echo "export MY_ENV=abc" >> ~/.bashrc #Add the appropriate environment variables
$ env | grep MY_ENV #Not yet reflected
$ . ~/.bashrc #Reload bashrc with source
$ env | grep MY_ENV #The environment variables set in ↑ are reflected
MY_ENV=abc
# chmod 755 *.sh: grant execute permission to sh files
# chmod 644 *.js: set js files to normal read/write
# * The mysterious numbers do have a meaning, but honestly I only use 644 and 755
# * You can also use letters like r and w, but I'm in the numbers camp
# * When you run a program and get "Permission denied", you usually just lack execute permission, so change it to 755
#
# To explain:
# * Three digits like 755 specify the following three things:
# [permissions for the owner][permissions for the owning group][permissions for others]
# * The digits mean the following:
# 0: no permission
# 1: execute permission
# 2: write permission
# 4: read permission
# (7=1+2+4 allows everything, 6=2+4 allows read and write only, and so on)
# * So 755 means "the owner can do anything, others can only read and execute",
# and 644 means "the owner can read and write, others can only read"
$ ls -l # before
total 0
-rw-r--r-- 1 arene arene 0 Feb 8 23:26 abc
$ chmod 755 abc #Grant execute permission
$ ls -l
total 0
-rwxr-xr-x 1 arene arene 0 Feb 8 23:26 abc
$ chmod 644 abc #Eliminate execute permission
$ ls -l
total 0
-rw-r--r-- 1 arene arene 0 Feb 8 23:26 abc
# Application: bulk change
# find dir1 -type f | grep sh$ | xargs chmod 755: grant execute permission to every sh file under dir1
# * find lists the relative paths of files -> grep keeps the ones ending in sh -> chmod runs on the matches
# * find dir1 -type f -name "*.sh" | xargs chmod 755 does the same
For those who want to know more: Linux permission check and change (chmod) (for super beginners)
# chown user1:group1 file1: change the owner of file1 (user to user1, group to group1)
# find dir1 | xargs chown user1:group1: change the owner of everything under dir1 at once
# * Check the user list with cat /etc/passwd
# (The first half is users added by various middleware, so usually the last few lines are enough)
# * Check the group list with cat /etc/group (same as above)
@@TODO:Concrete example(I didn't have an environment with multiple users)
# * A service is a program that runs in the background, such as a firewall or a web server
# (also called a daemon)
# * systemctl is the command to start and stop services
# * Commands of this type vary a lot by environment, but as of 2020 new environments use systemctl
# * Older Linux splits this into the service command and the chkconfig command
# * It doesn't seem to exist on macOS (I only recently started using one, so I have no knowledge there)
# * You can of course register your own program as a service
# Start, stop, and check the current status
# * Note that if you just start a service, it stops again when the OS restarts
# * Service names can be completed with the Tab key
systemctl status service1 #Check the status of service1(Check if you are alive or dead)
systemctl start service1 #Start service1
systemctl stop service1 #Stop service1
systemctl restart service1 #Reboot service1(Stop->Start-up)
#Auto start settings
# * enabled:Automatically starts when the OS starts
# * disabled:Does not start automatically when the OS starts
systemctl list-unit-files #List of services+Show whether to start automatically
systemctl enable service1 #Make service1 start automatically
systemctl disable service1 #Prevent service1 from starting automatically
systemctl is-enabled service1 #Check if service1 is set to start automatically
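As noted above, you can register your own program as a service. A minimal sketch of a unit file, assuming a hypothetical binary /usr/local/bin/myapp (names and paths are illustrative, not from the article):

```ini
# /etc/systemd/system/myapp.service (hypothetical)
[Unit]
Description=My sample app

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After placing the file, run systemctl daemon-reload, then start/enable it with the commands shown above.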
Command name | What can it do? | What does the command name come from? |
---|---|---|
date | Check and set the time | date |
df | Check free disk space | disk free |
du | Check the size of directories | disk usage |
free | Check memory availability | free |
top | Check CPU and memory usage | ?? |
ps | Confirmation of process information | process status |
kill | Stop the process by specifying the PID(Send a signal) | kill |
pkill | Stop all processes with the specified process name at once | process kill? |
pgrep | Shows the PID of the process with the specified process name | pid grep |
netstat | View network status | network status |
# date: show the current time
# date '+%Y%m%d %H:%M:%S': display the current time in YYYYMMDD hh:mm:ss format
# date -s "YYYYMMDD hh:mm:ss": change the OS time (origin: set)
# * I use it once in a while, and every time I do, I wonder what the format was
# * date -s "YYYYMMDD hh:mm:ss" (change the OS time) and touch -d "YYYYMMDD hh:mm:ss" (change a file timestamp)
# take the same format, so I remember just that much and search for the rest each time.
$ date
Sun Feb 9 11:00:41 JST 2020
$ date '+%Y%m%d %H:%M:%S'
20200209 11:01:13
$ date -s "20200209 11:02:00"
Sun Feb 9 11:02:00 JST 2020
For those who want to know more: Frequent date specification pattern of date command
# df -h: display disk usage / free space with units (origin: human readable)
# df: display disk usage / free space
# * I basically view it with -h
# * -h rounds the values, so if you want exact numbers, run it without options
# * You can also see what each filesystem is and where it is mounted
# * The output below is from Ubuntu on WSL, so a C drive shows up, which looks a little odd
@@TODO:Replace with the execution result on plain Ubuntu
$ df -h # Use% shows the percentage used; Used and Avail show how much is used and how much is free
Filesystem Size Used Avail Use% Mounted on
rootfs 230G 199G 31G 87% /
none 230G 199G 31G 87% /dev
none 230G 199G 31G 87% /run
none 230G 199G 31G 87% /run/lock
none 230G 199G 31G 87% /run/shm
none 230G 199G 31G 87% /run/user
cgroup 230G 199G 31G 87% /sys/fs/cgroup
C:\ 230G 199G 31G 87% /mnt/c
E:\ 223G 141G 83G 63% /mnt/e
$ df
Filesystem 1K-blocks Used Available Use% Mounted on
rootfs 240312316 207873316 32439000 87% /
none 240312316 207873316 32439000 87% /dev
none 240312316 207873316 32439000 87% /run
none 240312316 207873316 32439000 87% /run/lock
none 240312316 207873316 32439000 87% /run/shm
none 240312316 207873316 32439000 87% /run/user
cgroup 240312316 207873316 32439000 87% /sys/fs/cgroup
C:\ 240312316 207873316 32439000 87% /mnt/c
E:\ 233322492 146962124 86360368 63% /mnt/e
# du -h: display the size of each directory with units (origin: human readable)
# du: display the size of each directory
# * ls cannot show the size of a directory's contents
# * Use du when you want to see the actual size
# * When there are many subdirectories and the output is hard to read, pipe through grep or less as appropriate
$ ls -lh # With ls, directories all show a uniform 4.0K, so you can't tell the actual size
total 0
drwxr-xr-x 1 arene arene 4.0K Oct 14 08:53 dist
-rw-r--r-- 1 arene arene 0 Jan 1 10:10 file1.txt
drwxr-xr-x 1 arene arene 4.0K Oct 14 09:11 src
$ du -h # With du, you can see the actual size
0 ./dist/css
8.0K ./dist/img
888K ./dist/js
908K ./dist
8.0K ./src/assets
4.0K ./src/components
4.0K ./src/pages
16K ./src
924K .
# free -h: display memory usage with units (origin: human readable)
# free: display memory usage
# * The displayed columns differ slightly by OS (newer ones get an available column)
# * Although I'm introducing it, I'm not confident about the right way to read it..
# (My understanding: there is no problem as long as free and available are reasonably large)
# * If you know the right way to determine whether memory is running out, please leave a comment.
$ free -h
total used free shared buff/cache available
Mem: 7.9G 6.8G 886M 17M 223M 980M
Swap: 24G 1.1G 22G
$ free
total used free shared buff/cache available
Mem: 8263508 7099428 934728 17720 229352 1030348
Swap: 25165824 1149132 24016692
# top: check CPU and memory usage
# * By default, processes with high CPU usage rise to the top
# * %CPU is the CPU usage. You can see which process is overloaded.
# * I often look at the load average in the upper right
# When the value exceeds the number of CPU cores, the machine is under high load (on a dual core, 2 or more is high load)
# The number of CPU cores can be checked with cat /proc/cpuinfo
# * However, even if the load average is 3 on 4 cores,
# tasks may be concentrated on core 1 so that only core 1 is under high load.
# "Load average below the core count is OK" is only a rule of thumb
$ top
top - 12:06:17 up 87 days, 11:55, 0 users, load average: 0.52, 0.58, 0.59
Tasks: 13 total, 1 running, 12 sleeping, 0 stopped, 0 zombie
%Cpu(s): 10.2 us, 8.0 sy, 0.0 ni, 81.7 id, 0.0 wa, 0.1 hi, 0.0 si, 0.0 st
KiB Mem : 8263508 total, 1821072 free, 6213084 used, 229352 buff/cache
KiB Swap: 25165824 total, 23985072 free, 1180752 used. 1916692 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
5310 arene 20 0 17620 2052 1516 R 1.0 0.0 0:00.18 top
1 root 20 0 8896 172 136 S 0.0 0.0 0:00.21 init
74 root 20 0 19464 504 448 S 0.0 0.0 0:00.01 sshd
1862 root 20 0 57560 344 312 S 0.0 0.0 0:00.01 nginx
1863 www-data 20 0 58204 1036 904 S 0.0 0.0 0:00.19 nginx
1865 www-data 20 0 58204 1036 920 S 0.0 0.0 0:00.07 nginx
1868 www-data 20 0 58204 1036 904 S 0.0 0.0 0:00.01 nginx
1869 www-data 20 0 58204 948 856 S 0.0 0.0 0:00.00 nginx
1920 root 20 0 8904 224 176 S 0.0 0.0 0:00.01 init
1921 arene 20 0 17332 4032 3896 S 0.0 0.0 0:00.32 bash
1996 root 20 0 20220 4204 4056 S 0.0 0.1 0:00.17 sshd
2069 arene 20 0 20488 2092 1956 S 0.0 0.0 0:05.02 sshd
2070 arene 20 0 18828 5628 5520 S 0.0 0.1 0:11.96 bash
For those who want to know more: Viewing load average in the multi-core era / How to use the top command
# ps -ef: see detailed information on all processes (origin: every, full)
# * Use 1: check whether a process is alive (is the web server running?)
# * Use 2: check the PID (process ID) of a process -> kill ${PID}
# * I'm sure it can show many other things, but I don't know them well
# * For historical reasons the options are split into two systems, which is super confusing, but I only use -ef
$ ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 2019 ? 00:00:00 /init ro
root 74 1 0 2019 ? 00:00:00 /usr/sbin/sshd
root 1862 1 0 2019 ? 00:00:00 nginx:
www-data 1863 1862 0 2019 ? 00:00:00 nginx:
www-data 1865 1862 0 2019 ? 00:00:00 nginx:
www-data 1868 1862 0 2019 ? 00:00:00 nginx:
www-data 1869 1862 0 2019 ? 00:00:00 nginx:
root 1920 1 0 Feb04 tty1 00:00:00 /init ro
arene 1921 1920 0 Feb04 tty1 00:00:00 -bash
root 1996 74 0 Feb04 ? 00:00:00 sshd: arene [priv]
arene 2069 1996 0 Feb04 ? 00:00:04 sshd: arene@pts/0
arene 2070 2069 0 Feb04 pts/0 00:00:11 -bash
arene 5090 2070 0 11:13 pts/0 00:00:00 ps -ef
# kill 123: stop the process with process ID 123 (sends SIGTERM)
# kill -9 123: kill the process with process ID 123, no questions asked (9 is the signal number of SIGKILL)
# kill -KILL 123: same as -9
# * As the name suggests, 99% of uses are to kill a process
# * To be precise, though, it is a command that sends an arbitrary signal to a specific process.
# By default it sends SIGTERM (origin: terminate)
# * A signal is a notification the OS sends to a process to request interrupt handling
# (For example, Ctrl+C while a command is running sends SIGINT (origin: interrupt).
# Because it is interrupt handling, even a program stuck in an infinite loop can be terminated.)
# * SIGKILL (9) is the strongest signal: send it and the process dies, no questions asked
# * SIGKILL is quite rough, though, so keep it as a last resort and try a plain kill first
# (With SIGKILL, not even cleanup processing is allowed before the cut-off, so in rare cases something breaks at the next startup)
$ ps -ef | grep eternal_loop | grep -v grep # Find the PID of a hastily written infinite-loop program
arene 5500 2070 0 13:00 pts/0 00:00:00 ./eternal_loop
$ kill 5500 #Specify pid and kill
[1]+ Terminated ./eternal_loop
$ ps -ef | grep eternal_loop | grep -v grep #Confirm that it was killed
$
More information: Basics of Linux Signals SIGNAL Man Page
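The difference matters in practice: a plain kill (SIGTERM) lets a program run its cleanup handler, while kill -9 skips it entirely. A minimal sketch using a hypothetical toy script that traps SIGTERM (script name and paths are made up for illustration):

```shell
# a toy daemon (hypothetical) that traps SIGTERM and cleans up before exiting
cat > /tmp/looper.sh <<'EOF'
#!/bin/bash
trap 'echo "caught SIGTERM, cleaning up" > /tmp/looper.log; exit 0' TERM
while true; do sleep 0.1; done
EOF
chmod 755 /tmp/looper.sh

/tmp/looper.sh &     # run in the background
sleep 0.3            # give it time to install the trap
kill $!              # plain kill = SIGTERM: the trap fires and cleanup runs
sleep 0.5
cat /tmp/looper.log  # -> caught SIGTERM, cleaning up
# kill -9 would bypass the trap, so no log would be written
```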
# pkill process_name_prefix: terminate all processes whose names start with process_name_prefix
# pkill -9 process_name_prefix: terminate all of them, no questions asked
# * For signals, see the previous section (kill)
# * Every hit gets killed, so before running it, it is better to check the targets
# with ps -ef | grep process_name_prefix
$ ps -ef | grep eternal_loop | grep -v grep #There are many processes that loop infinitely
arene 5558 2070 0 13:13 pts/0 00:00:00 ./eternal_loop
arene 5562 2070 0 13:13 pts/0 00:00:00 ./eternal_loop2
arene 5566 2070 0 13:13 pts/0 00:00:00 ./eternal_loop3
arene 5570 2070 0 13:13 pts/0 00:00:00 ./_bak_eternal_loop
$ pkill eternal_loop #Kill all at once with pkill
[1] Terminated ./eternal_loop
[2] Terminated ./eternal_loop2
[3]- Terminated ./eternal_loop3
$ ps -ef | grep eternal_loop | grep -v grep #Three prefix-matched matches were killed
arene 5570 2070 0 13:13 pts/0 00:00:00 ./_bak_eternal_loop
# pgrep process_name_prefix: output the PIDs of all processes whose names start with process_name_prefix
# * Mainly used when you want to extract PIDs dynamically in a shell script or one-liner
# * Often combined with $() (command substitution) or xargs
$ ps -ef | grep eternal_loop | grep -v grep #Lots of processes that loop infinitely
arene 5570 2070 0 13:13 pts/0 00:00:00 ./_bak_eternal_loop
arene 5590 2070 0 13:18 pts/0 00:00:00 ./eternal_loop
arene 5594 2070 0 13:18 pts/0 00:00:00 ./eternal_loop2
arene 5598 2070 0 13:18 pts/0 00:00:00 ./eternal_loop3
$ pgrep eternal_loop #Extract process ID
5590
5594
5598
$ pgrep eternal_loop | xargs kill #Kill the extracted process
[5] Terminated ./eternal_loop
[6]- Terminated ./eternal_loop2
[7]+ Terminated ./eternal_loop3
$ ps -ef | grep eternal_loop | grep -v grep #Death confirmation
arene 5570 2070 0 13:13 pts/0 00:00:00 ./_bak_eternal_loop
# netstat -anp | less: check network status
# * -a: show all connections (origin: all)
# * -n: display raw IP addresses and port numbers without name resolution (origin: numeric)
# * -p: show process IDs (origin: process)
# * It can show many things, but I only use -anp
# * It is handy for checking the state of each port, such as LISTENING, ESTABLISHED, and TIME_WAIT
@@TODO:Paste the execution result on ubuntu(I couldn't see it in my WSL environment)
More information: Mastering the "netstat" command to check the status of TCP / IP communication
Command name | What can it do? | What does the command name come from? |
---|---|---|
find | Find files and directories(Path output) | find |
history | Command history reference | history |
diff | Difference confirmation | difference |
jobs | Check running jobs | jobs |
bg | Move the specified job to the background | background |
fg | Move the specified job to the foreground | foreground |
& | Background execution | |
&&, \|\| | Conditional execution (AND / OR) | |
$(), <() | Command substitution, process substitution | |
$? | Check the exit status of the previous command | |
for | Loop processing | |
# find dir1 -type f: display a list of files under dir1
# find dir1 -type f -name "*.js": display a list of js files under dir1
# find dir1 -type d: display a list of directories under dir1
# * Options are abundant: search only down to n levels, search only files older than a specific date,
# search for files with specific permissions, and so on
# * But I keep forgetting them, so the above is all I can write off the top of my head
# * Unlike ls, find outputs file paths, so it suits bulk operations like find xxx | xargs rm -rf
$ find src/ -type f
src/App.vue
src/assets/logo.png
src/components/HelloWorld.vue
$ find src/ -type f -name "*.png"
src/assets/logo.png
$ find src/ -type d
src/
src/assets
src/components
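The depth and age filters mentioned above, as a quick sketch (the /tmp/finddemo tree is hypothetical):

```shell
# hypothetical demo tree
mkdir -p /tmp/finddemo/sub
touch /tmp/finddemo/a.txt /tmp/finddemo/sub/b.txt

find /tmp/finddemo -maxdepth 1 -type f   # depth limit: only a.txt is listed
find /tmp/finddemo -type f -mtime +30    # only files modified more than 30 days ago (none here)
find /tmp/finddemo -type f -perm 644     # only files whose permissions are exactly 644
```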
For those who want to know more: [12 usages to remember with the find command](https://orebibou.com/2015/03/find%E3%82%B3%E3%83%9E%E3%83%B3 % E3% 83% 89% E3% 81% A7% E8% A6% 9A% E3% 81% 88% E3% 81% A6% E3% 81% 8A% E3% 81% 8D% E3% 81% 9F% E3 % 81% 84% E4% BD% BF% E3% 81% 84% E6% 96% B912% E5% 80% 8B /)
# history | less: check command history
# * Use 1: I hacked together an environment and it somehow worked; to find out what I actually did
# * Use 2: investigating how a mysterious server you are logging into for the first time has been used
# * Use 3: finding and reusing that command you use a lot but is too long to remember
# * The environment variable HISTSIZE sets how many history entries are kept
# The default is usually small, so raising it may save you some day
# * If you type an ssh or DB password directly on the command line, it can be pulled out of the history, so be careful
# * If you just want to reuse a command, Ctrl+R is recommended.
# Adding fzf makes it much easier to use.
$ history | tail
3325 find src/ -type f
3326 find src/ -type d
3327 find src/ -type f -name "*.png "
3328 find src/ -type d | xargs ls
3329 find src/ -type d | xargs ls-l
3330 find src/ -type d | xargs ls -l
3331 find src/ -type d | xargs -n 1 ls -l
3332 find src/ -type d -ls
3333 find src/ -type f -ls
3334 history | tail
$ echo $HISTSIZE
10000
# diff file1 file2: show the difference between file1 and file2
# diff -r dir1 dir2: show the difference between dir1 and dir2 (subdirectories included)
# * Often used to check whether two environments differ, e.g. after environment construction
# * If you want to examine differences in detail, use WinMerge, Meld, or some other comparison tool
$ ls
dist src
$ cp -pr src/ src2 # Copy, then diff => no difference (naturally)
$ diff -r src src2
$ echo "abc" >> src2/App.vue #Make a difference on purpose and see the difference
$ diff -r src src2
diff -r src/App.vue src2/App.vue
17a18
> abc
#Advanced version:Compare the sorted results
$ cat unsort1.txt #A file in which 1 to 5 are randomly arranged
1
5
2
4
3
$ cat unsort2.txt #File in which 1 to 5 are randomly arranged Part 2
1
2
3
5
4
$ diff <(cat unsort1.txt | sort) <(cat unsort2.txt | sort) #No difference when comparing the sorted results
$
$ diff $(cat unsort1.txt | sort) $(cat unsort2.txt | sort) # Looks similar, but command substitution causes an error
diff: extra operand '3'
diff: Try 'diff --help' for more information.
# Commentary
# * <(cmd): treat the execution result of cmd as input to another command (process substitution)
# * $(cmd): expand the execution result of cmd as a string (command substitution)
# * Process substitution lets you compare the sorted results of two files in a one-liner
# * Without it, you would have to write the sorted output to another file first, which is a hassle
# * Useful for comparing csv files and the like
# * <() is treated as a file, while $() is expanded as a string inside the command.
# Process substitution suits commands that take files as arguments, such as diff.
# jobs: display a list of jobs running in the background
# fg 1: switch job 1 to the foreground
# bg 1: switch job 1 to the background
# * Used to bring back a program that was accidentally sent to the background
# * The classic case is pressing Ctrl+Z in vi
# (Some beginners press Ctrl+Z while editing in vi, the job stops, and they have no idea where they are.
# In that case, calm down: jobs -> fg and you are fine)
# * If you started something in the foreground that should have run in the background,
# stop it with Ctrl+Z, then jobs -> switch it to the background with bg
# (...though I always just stop it with Ctrl+C and re-run it with &, so I haven't actually used that)
$ ./eternal_loop1 & #Running a program that loops infinitely in the background
[1] 5906
$ ./eternal_loop2 &
[2] 5910
$ ps -ef | grep eternal_loop | grep -v grep
arene 5906 2070 0 18:29 pts/0 00:00:00 ./eternal_loop1
arene 5910 2070 0 18:29 pts/0 00:00:00 ./eternal_loop2
$ jobs #If you look at jobs, you can see that there are two running in the background
[1]- Running ./eternal_loop1 &
[2]+ Running ./eternal_loop2 &
$ fg 2 #Switch job number 2 to foreground
./eternal_loop2
^C # an infinite loop never ends on its own, so end it with Ctrl+C
$ jobs #Confirm that job number 2 has finished
[1]+ Running ./eternal_loop1 &
$ ps -ef | grep eternal_loop | grep -v grep
arene 5906 2070 0 18:29 pts/0 00:00:00 ./eternal_loop1
# cmd1: run cmd1 in the foreground
# cmd1 &: run cmd1 in the background
# * Convenient for heavy batch processing or for temporarily running a web server in the background
# (Of course, you can also just open another terminal)
# * Easy to confuse with the && below and with the redirect 2>&1, but they are all different
# * There is no way around it but to get used to this family of symbols
$ ./eternal_loop1 & #Running a program that loops infinitely in the background
[1] 6104
$ echo 123 #Since it was executed in the background, other commands can be used.
123
# cmd1 && cmd2: if cmd1 succeeds, run cmd2 (if cmd1 fails, it stops there)
# cmd1 || cmd2: if cmd1 fails, run cmd2 (if cmd1 succeeds, it stops there)
# * Use 1: writing small sequential processing as a one-liner
# * Use 2: cmd1 || echo "error message"
# * The examples below are not very practical, but they show the behavior
##Cases where both succeed
$ echo aaa && echo bbb
aaa
bbb
$ echo aaa || echo bbb
aaa
##Cases where both fail
$ echoooo aaa && echoooo bbb
echoooo: command not found
$ echoooo aaa || echoooo bbb
echoooo: command not found
echoooo: command not found
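A couple of practical one-liners in the spirit of Use 1 and Use 2 (the /tmp/build path is hypothetical):

```shell
# sequential processing: enter the directory only if mkdir succeeded
mkdir -p /tmp/build && cd /tmp/build && pwd    # -> /tmp/build

# print a message only on failure
ls /nonexistent 2>/dev/null || echo "error: ls failed"   # -> error: ls failed
```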
# echo ${var1}: output the contents of variable var1 (variable expansion)
# echo $(cmd1): output the execution result of cmd1 (command substitution)
# echo `cmd1`: almost the same as above (command substitution, old notation)
# diff <(cmd1) <(cmd2): compare the execution results of cmd1 and cmd2 (process substitution)
# * ${} and $() are easy to confuse. ${} does variable substitution, like JS template literals
# * $() is the new notation for ``. Its advantage is that it nests easily, as in $(cmd1 $(cmd2)).
# Often combined with dynamically changing content such as date or pgrep.
# * <() is for when you want a temporary file in a shell script or one-liner.
# Combine it with commands that read file contents, such as cat, diff, or while read line.
$ cat lsByOption.sh # I couldn't come up with a good one-liner example, so I made a sloppy shell script
#!/bin/bash
OPTION=$1
ls $(echo ${OPTION}) # If the first argument is -l, this becomes ls -l
$ ls #Run ls normally
lsByOption.sh unsort1.txt unsort2.txt
$ ./lsByOption.sh -l # ls $(echo ${OPTION}) becomes ls -l
total 0
-rwxr-xr-x 1 arene arene 45 Feb 9 19:44 lsByOption.sh
-rw-r--r-- 1 arene arene 10 Feb 9 19:29 unsort1.txt
-rw-r--r-- 1 arene arene 10 Feb 9 19:30 unsort2.txt
$ ./lsByOption.sh -al # ls $(echo ${OPTION}) becomes ls -al
total 0
drwxr-xr-x 1 arene arene 4096 Feb 9 19:44 .
drwxr-xr-x 1 arene arene 4096 Feb 9 19:28 ..
-rwxr-xr-x 1 arene arene 45 Feb 9 19:44 lsByOption.sh
-rw-r--r-- 1 arene arene 10 Feb 9 19:29 unsort1.txt
-rw-r--r-- 1 arene arene 10 Feb 9 19:30 unsort2.txt
For those who want to know more: -Glue that connects commands -[Use bash's process substitution function to streamline shell work and script writing](https://sechiro.hatenablog.com/entry/2013/08/15/bash%E3%81%AE%E3 % 83% 97% E3% 83% AD% E3% 82% BB% E3% 82% B9% E7% BD% AE% E6% 8F% 9B% E6% A9% 9F% E8% 83% BD% E3% 82 % 92% E6% B4% BB% E7% 94% A8% E3% 81% 97% E3% 81% A6% E3% 80% 81% E3% 82% B7% E3% 82% A7% E3% 83% AB % E4% BD% 9C% E6% A5% AD% E3% 82% 84% E3% 82% B9)
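Two more common patterns from the notes above: $(date) building a name dynamically, and <() feeding while read (file names are hypothetical):

```shell
# command substitution: datestamped backup name, built dynamically with date
touch /tmp/app.conf                                  # hypothetical config file
cp /tmp/app.conf /tmp/app.conf.bak.$(date +%Y%m%d)   # e.g. app.conf.bak.20200209

# process substitution: feed find results to while read, no temp file needed
while read -r f; do
  echo "found: $f"
done < <(find /tmp -maxdepth 1 -name "app.conf")     # -> found: /tmp/app.conf
```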
# echo $?: show the exit status of the last command
# * Used when writing error handling in a shell script
# * I listed it on momentum, but I may not use it that much
$ echo 123 #OK case
123
$ echo $?
0
$ hdskds #NG case
hdskds: command not found
$ echo $?
127
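In a shell script, $? typically drives error handling like this (a minimal sketch; the grep check is just an example):

```shell
# check the previous command's status and branch on it (a common script pattern)
grep -q root /etc/passwd        # succeeds on virtually every system
if [ $? -ne 0 ]; then
  echo "root not found" >&2
  exit 1
fi
echo "root exists"              # -> root exists
```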
# for i in {1..10} ; do cmd1; done: repeat cmd1 10 times
# * I want to use it often, but every time I do, I wonder how to write it
# * I forced it onto one line to make a one-liner; with proper line breaks it becomes:
# for i in {1..10} ;
# do
# cmd1;
# done
# * {1..10}: brace expansion (sequence version): expands to 1 2 3 4 5 6 7 8 9 10
# * $(seq 10) works instead, too
$ for i in {1..10} ; do echo $i; done
1
2
3
4
5
6
7
8
9
10
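Besides counters, for also loops over file names directly, which combines nicely with the commands above (directory and file names are hypothetical):

```shell
# rename every .txt file in a hypothetical directory to .bak
mkdir -p /tmp/fordemo && cd /tmp/fordemo
touch a.txt b.txt c.txt
for f in *.txt; do
  mv "$f" "${f%.txt}.bak"   # ${f%.txt} strips the .txt suffix
done
ls                          # -> a.bak  b.bak  c.bak
```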
Commands I decided to omit, for reasons like "I use it a lot, but it's probably out of scope" or "I rarely use it and don't really know it well". I will list only the names.
Command name | What can it do? | What does the command name come from? |
---|---|---|
vi | Edit file | visual editor (visual interface) |
make | Compiling the program | make |
curl | Make an HTTP request | command url?? |
rsync | Synchronize directory contents over the network | remote synchronizer |
ssh-keygen | Create private and public keys for ssh | ssh key generator |
npm | Install node packages, etc. | node package manager |
git | Use git (all sorts of things) | British slang for a stupid person <- first time I learned this |
I don't use these at all, but I wanted to introduce them anyway, so I wrote them up. It was fun.
Command name | What can it do? | What does the command name come from? |
---|---|---|
nice | Adjust process priority | |
sl | Train comes out | Opposite of ls |
# nice -n 20 cmd1: run cmd1 at the lowest priority (values are clamped to the valid range)
# * Every linux process has a priority called the nice value:
# -20 (highest priority) to 19 (lowest priority)
# * It can be used when you want a CPU-heavy batch job to run only in the gaps left by other processes (I have no such experience)
# * I learned about it from Yutaka Takano's essay book "Message from root"
# When unix first came to Japan, CPU resources were precious,
# and root administrators apparently raised the nice value to counter bad users who ran heavy jobs all the time
# * I'm not sure why, but I really like this episode
$ nice -n 20 ls
src dist
Reference: [Message from root to / (Amazon)](https://www.amazon.co.jp/root-%E3%83%AB%E3%83%BC%E3%83%88-%E3% 81% 8B% E3% 82% 89-% E3% 81% B8% E3% 81% AE% E3% 83% A1% E3% 83% 83% E3% 82% BB% E3% 83% BC% E3% 82 % B8% E2% 80% 95% E3% 82% B9% E3% 83% BC% E3% 83% 91% E3% 83% BC% E3% 83% A6% E3% 83% BC% E3% 82% B6 % E3% 83% BC% E3% 81% 8C% E8% A6% 8B% E3% 81% 9F% E3% 81% B2% E3% 81% A8% E3% 81% A8% E3% 82% B3% E3 % 83% B3% E3% 83% 94% E3% 83% A5% E3% 83% BC% E3% 82% BF / dp / 4756107869)
When you run it, a steam locomotive runs across the terminal. When you make a typo, calm down by watching the train. (Ctrl+C is deliberately disabled, which is a nice touch of humor)
For those who want to know more: Useless at work! A collection of Linux joke commands
Command line operation is fun, and the stories around Linux (Unix) systems are interesting. I hope this article helps you make discoveries and expand your interests.