.. note::
   Cold cicadas cry plaintively; by the roadside pavilion at dusk, a sudden shower has just ended.
   -- Liu Yong (Song), "Yu Lin Ling"

The Linux alias command defines aliases for other commands, letting you shorten long command lines.
By default, running it with no arguments prints the current aliases:
$ alias
l='ls -lah'
la='ls -lAh'
ll='ls -lh'
ls='ls --color=tty'
So after this, typing ll is equivalent to typing ls -lh.
Defining an alias of your own is just as simple:
$ alias newcommand='command setting'
For example:
$ alias ll='ls -lh' # extremely practical
Note, however, that an alias defined at the terminal is not preserved once the session is closed.
If you want it available every time, add the command to your .bashrc.
Some commonly used aliases:
$ alias
alias cp='cp -i'
alias l.='ls -d .* --color=tty'
alias ll='ls -l --color=tty'
alias ls='ls --color=tty'
alias mv='mv -i'
alias rm='rm -i'
alias which='alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde'
# very handy for switching directories
$ alias ..='cd ..'
$ alias ...='cd ../../../'
$ alias ....='cd ../../../../'
$ alias .....='cd ../../../../..'
$ alias .4='cd ../../../../'
$ alias .5='cd ../../../../..'
# get disk information
$ alias df='df -H'
$ alias du='du -ch'
# some system-information aliases
$ alias cpuinfo='lscpu'
$ alias meminfo='free -h'
Here is a slightly more involved one:
$ alias lt='ls --human-readable --size -1 -S --classify'
lt
lists files one per line, sorted by size, with human-readable sizes shown.
Finally, just as an alias can be set, it can also be removed -- simply use unalias.
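The full lifecycle -- define, check, and remove -- can be sketched like this (the alias name `ll` is just an example):

```shell
# define an alias in the current shell
alias ll='ls -lh'
# confirm it is registered (prints the definition)
alias ll
# remove it again
unalias ll
# remove ALL aliases in the current shell
unalias -a
```

Remember that this only affects the current shell; as noted above, put the definitions in .bashrc to make them permanent.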
.. note::
   The moonlight of old days -- how many times has it shone on me, playing the flute beside the plum blossoms?
   -- Jiang Kui (Song), "An Xiang"

apropos means "apt" or "to the point" -- admittedly not an easy word (or command name) to remember, though it is a fine addition to one's vocabulary.
When would you use it? First look at the command's definition.
The official definition of apropos is:
search the manual page names and descriptions
The meaning is clear: when you cannot remember a command, or do not know which command to use, you can look it up by keyword. For example, suppose we want to plot a graph on Linux but have no idea which command to use; we can try:
$ apropos plot
bno_plot (1) – generate interactive 3D plot of IO blocks and sizes
gnuplot (1) – an interactive plotting program
pbmtoplot (1) – convert a PBM image into a Unix 'plot' file
Your output may differ; it depends on the installed packages and the indexed database.
Here is another example, which should look similar for most people:
$ apropos who
at.allow (5) - determine who can submit jobs via at or batch
at.deny (5) - determine who can submit jobs via at or batch
btrfs-filesystem (8) - command group of btrfs that usually work on the whole filesystem
docker-trust-signer (1) - Manage entities who can sign Docker images
ipsec_newhostkey (8) - generate a new raw RSA authentication key for a host
ipsec_showhostkey (8) - show host's authentication key
w (1) - Show who is logged on and what they are doing.
who (1) - show who is logged on
who (1p) - display who is on the system
whoami (1) - print effective userid
This command sees little everyday use; like whatis, its functionality has been folded into the all-encompassing man command (as man -k).
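As a quick check of that last point, `man -k` is documented to behave like apropos, so the two invocations below should produce the same listing (assuming the man database has been built, e.g. with mandb):

```shell
# these two are equivalent ways to search man page names/descriptions
apropos plot
man -k plot
```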
.. note::
   In those years she would not marry the spring wind, only to be wronged, for no reason, by the autumn wind.
   -- He Zhu, "Fang Xin Ku (Willows Round the Pond)"

.. note::
   The Dipper and the southern stars shift day and night; the bird and the hare fly and run.
   -- Wang Zhe (Yuan), "Bu Suan Zi (Lamenting a Deluded World)"

The Linux bc command is an interactive calculator supporting arbitrary precision.
More precisely, bc is an arbitrary-precision calculator language with interactive statement execution, bearing some resemblance to C. A standard math library is also available via a command-line option (-l).
The official definition is:
bc - An arbitrary precision calculator language
Usage:
$ bc [ -hlwsqv ] [long-options] [ file ... ]
With no arguments it enters an interactive environment where you can calculate directly:
$ bc
bc 1.07.1
Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006, 2008, 2012-2017 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
12*3
36
Type quit to exit.
Simple cases can be handled with a pipe, like so:
$ echo "3.1415926 * 3" | bc
9.4247778
Precision can be controlled with scale; the following keeps 3 decimal places:
$ echo "scale=3; 2/3" | bc
.666
Some math functions are also available, for example:
$ echo "sqrt(36)" | bc
6
ibase makes radix conversion easy. Below, the same input 111 is interpreted as a base-2, base-4, and base-8 number respectively, and printed in decimal:
$ echo 'ibase=2;111' | bc
7
$ echo 'ibase=4;111' | bc
21
$ echo 'ibase=8;111' | bc
73
Likewise, obase specifies the output base. Below, the octal input 111 is converted to base 2, 4, and 8 respectively:
$ echo 'ibase=8;obase=2;111' | bc
1001001
$ echo 'ibase=8;obase=4;111' | bc
1021
$ echo 'ibase=8;obase=8;111' | bc
111
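The standard math library mentioned earlier is loaded with -l, which also sets scale to 20 and provides functions such as s(x) (sine), c(x) (cosine), and l(x) (natural logarithm):

```shell
# natural logarithm; -l loads the math library and sets scale=20
echo "l(2.718281828)" | bc -l
# sine of 0 is exactly 0
echo "s(0)" | bc -l
```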
.. note::
   A thousand times I searched for her in the crowd; then, turning my head, I suddenly found her there, where the lantern light was dim.
   -- Xin Qiji (Song), "Qing Yu An (Lantern Festival)"

cal displays the calendar for the current month, or the Gregorian calendar for a specified date.
The official definition of cal is:
cal, ncal -- displays a calendar and the date of Easter
The name cal is simply the first three letters of "calendar".
It has several usage forms, for example:
$ cal [-31jy] [-A number] [-B number] [-d yyyy-mm] [[month] year]
$ cal [-31j] [-A number] [-B number] [-d yyyy-mm] -m month [year]
$ ncal [-C] [-31jy] [-A number] [-B number] [-d yyyy-mm] [[month] year]
$ ncal [-C] [-31j] [-A number] [-B number] [-d yyyy-mm] -m month [year]
$ ncal [-31bhjJpwySM] [-A number] [-B number] [-H yyyy-mm-dd] [-d yyyy-mm] [-s country_code] [[month] year]
$ ncal [-31bhJeoSM] [-A number] [-B number] [-d yyyy-mm] [year]
cal can take no arguments, or several in combination.
[[month] year] means that the year argument may appear alone, or be preceded by a month, giving the two-argument form "month year".
The main options are:
-3 : display the previous, current, and next month
-y : display the calendar for a whole year (do not also give a month argument)
-j : display the day of the year (Julian dates)
With no arguments, the current month is shown:
$ cal
February 2011
Su Mo Tu We Th Fr Sa
1 2 3 4 5
6 7 8 9 10 11 12
13 14 15 16 17 18 19
20 21 22 23 24 25 26
27 28
For example, to look at December 2012, run the following:
$ cal 12 2012
December 2012
Su Mo Tu We Th Fr Sa
1
2 3 4 5 6 7 8
9 10 11 12 13 14 15
16 17 18 19 20 21 22
23 24 25 26 27 28 29
30 31
-3 displays three months in total: the current month plus the month before and the month after.
$ cal -3
2011
January February March
Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
1 1 2 3 4 5 1 2 3 4 5
2 3 4 5 6 7 8 6 7 8 9 10 11 12 6 7 8 9 10 11 12
9 10 11 12 13 14 15 13 14 15 16 17 18 19 13 14 15 16 17 18 19
16 17 18 19 20 21 22 20 21 22 23 24 25 26 20 21 22 23 24 25 26
23 24 25 26 27 28 29 27 28 27 28 29 30 31
30 31
With the -y option you can view the calendar for a whole year.
$ cal -y
2011
January February March
Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
1 1 2 3 4 5 1 2 3 4 5
2 3 4 5 6 7 8 6 7 8 9 10 11 12 6 7 8 9 10 11 12
9 10 11 12 13 14 15 13 14 15 16 17 18 19 13 14 15 16 17 18 19
16 17 18 19 20 21 22 20 21 22 23 24 25 26 20 21 22 23 24 25 26
23 24 25 26 27 28 29 27 28 27 28 29 30 31
30 31
April May June
Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
1 2 1 2 3 4 5 6 7 1 2 3 4
3 4 5 6 7 8 9 8 9 10 11 12 13 14 5 6 7 8 9 10 11
10 11 12 13 14 15 16 15 16 17 18 19 20 21 12 13 14 15 16 17 18
17 18 19 20 21 22 23 22 23 24 25 26 27 28 19 20 21 22 23 24 25
24 25 26 27 28 29 30 29 30 31 26 27 28 29 30
July August September
Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
1 2 1 2 3 4 5 6 1 2 3
3 4 5 6 7 8 9 7 8 9 10 11 12 13 4 5 6 7 8 9 10
10 11 12 13 14 15 16 14 15 16 17 18 19 20 11 12 13 14 15 16 17
17 18 19 20 21 22 23 21 22 23 24 25 26 27 18 19 20 21 22 23 24
24 25 26 27 28 29 30 28 29 30 31 25 26 27 28 29 30
31
October November December
Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
1 1 2 3 4 5 1 2 3
2 3 4 5 6 7 8 6 7 8 9 10 11 12 4 5 6 7 8 9 10
9 10 11 12 13 14 15 13 14 15 16 17 18 19 11 12 13 14 15 16 17
16 17 18 19 20 21 22 20 21 22 23 24 25 26 18 19 20 21 22 23 24
23 24 25 26 27 28 29 27 28 29 30 25 26 27 28 29 30 31
30 31
-j displays Julian dates -- here meaning the day of the year, counted from January 1 -- which is quite handy for countdowns.
$ cal -j 2 2011
February 2011
Su Mo Tu We Th Fr Sa
32 33 34 35 36
37 38 39 40 41 42 43
44 45 46 47 48 49 50
51 52 53 54 55 56 57
58 59
.. note::
   Year after year from now, fine hours and lovely scenes will be set out in vain.
   -- Liu Yong (Song), "Yu Lin Ling"

The cat command prints the contents of files to standard output.
The official definition of cat is:
concatenate files and print on the standard output
Its general usage is:
$ cat [OPTION]... [FILE]...
The optional [OPTION] arguments to cat include:
-n or --number : number all output lines, starting from 1
-b or --number-nonblank : like -n, but do not number blank lines
-s or --squeeze-blank : squeeze runs of two or more blank lines into a single blank line
-T or --show-tabs : display TAB characters as ^I
-E or --show-ends : display a $ at the end of each line
-A or --show-all : display everything
Assume our file is hello.c, with the most classic content of all:
#include <stdio.h>
int main(int argc, char * argv[])
{
printf("Hello World\n");
return 0;
}
All of the examples that follow are based on this file. Hello World, hello Linux.
$ cat hello.c
#include <stdio.h>
int main(int argc, char * argv[])
{
printf("Hello World\n");
return 0;
}
$ cat -n hello.c
1 #include <stdio.h>
2
3 int main(int argc, char * argv[])
4 {
5 printf("Hello World\n");
6
7 return 0;
8 }
$ cat -E hello.c
#include <stdio.h>$
$
int main(int argc, char * argv[])$
{$
printf("Hello World\n");$
$
return 0;$
}$
$ cat -T hello.c
#include <stdio.h>
int main(int argc, char * argv[])
{
^Iprintf("Hello World\n");
^I
^Ireturn 0;
}
Here ^I marks each Tab character.
For instance, if you want a numbered copy of your source file -- to see how many lines it has and refer to each by number -- you can redirect the numbered output into another file, as follows:
$ cat -n hello.c > hello_number.c
$ cat hello_number.c
1 #include <stdio.h>
2
3 int main(int argc, char * argv[])
4 {
5 printf("Hello World\n");
6
7 return 0;
8 }
The remaining options are left for you to try.
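The -s option listed above has no example in the text; here is a minimal sketch using a throwaway file with consecutive blank lines:

```shell
# create a file containing a run of three blank lines
printf 'first\n\n\n\nlast\n' > blank_demo.txt
# -s squeezes the run of blank lines down to a single one
cat -s blank_demo.txt
rm -f blank_demo.txt
```

The squeezed output is just "first", one blank line, then "last" -- three lines instead of five.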
.. note::
   A moonlit bridge, a flowered courtyard, latticed windows and vermilion doors -- only spring knows where she dwells.
   -- He Zhu (Song), "Qing Yu An"

cd is probably the most used command after ls -- unless you never set foot outside your door.
The meaning of cd is
cd - change directory
It lets us move between directories.
The simplest usage is:
$ cd /the/path/you/want/to/go/
Next, a few tricks to double your efficiency.
If you want tab completion to ignore case (so that a lowercase argument also matches capitalized directory names), run the bind command below; it saves you from the Linux-vs-linux awkwardness:
$ bind "set completion-ignore-case on"
To go back to the directory you were just in (seemingly few people use this, but it is genuinely handy), run:
$ cd -
To jump quickly back to your home directory, just type cd by itself -- no need to climb up level by level:
$ cd
To enter another user's home directory (this requires appropriate permissions on that directory):
cd ~username
Those were the basics; here are some more advanced tricks. They may see less use, but they are very helpful.
The CDPATH variable defines a search path for cd:
$ export CDPATH=/the/path/you/add/:/another/path/
Now, instead of typing the full cd /the/path/you/add/hello/, I can reach /the/path/you/add/hello/ directly with:
$ cd hello
Not many people seem to use the next one, but it is genuinely useful and effective:
$ cd !$
It means: use the last argument of the previous command as the argument to cd.
shopt -s cdspell makes cd automatically correct small typos in directory names.
If you often mistype, this is very useful. See the following example:
# cd /etc/mall
-bash: cd: /etc/mall: No such file or directory
# shopt -s cdspell
# cd /etc/mall
# pwd
/etc/mail
Note: after I mistyped mail as mall, with this option enabled mall was automatically corrected to mail.
.. note::
   The ambition of the wild swan soars toward the blazing sky.
   -- Liu Kezhuang (Song), "He Xin Lang (A Victory Song for Du Zixin)"

The Linux chgrp command changes the group ownership of files or directories.
The change is not limited to your primary group: an ordinary user may use chgrp to switch a file to any group that user belongs to, without needing administrator privileges.
Many permission operations intersect with chmod. For example, if you want a file accessible only to members of its group, you can run chmod 770 file/directory -- which is exactly where the group concept comes in.
The official definition is:
chgrp
- change group ownership
The syntax is:
$ chgrp [OPTION]... GROUP FILE...
$ chgrp [OPTION]... --reference=RFILE FILE...
The most commonly used options are:
--reference=RFILE : take the group from the given reference file
-R, --recursive : operate recursively, changing the group of every file under a directory
The simplest use assigns file to group, like so:
$ chgrp group file
$ chgrp group1 file1
Afterwards, file belongs to group, and file1 to group1.
For directories you need the -R option to recurse; otherwise only the directory itself, not its contents, is changed.
$ chgrp -R group1 directory1
$ chgrp -R group2 directory2
The next option is interesting and efficient: if you want a file's group to match that of another file, --reference comes to the rescue:
$ chgrp --reference=ref_file stage_file
After this command, stage_file has the same group as ref_file.
To set the SGID bit (so that files created inside inherit the directory's group) and the sticky bit (so that files cannot be deleted by anyone but their owner):
chmod g+s,o+t /home/groupdir
See the detailed meanings of SGID/SUID and the sticky bit for more.
File permissions determine whether a file can be executed, written, read, and so on.
Linux/Unix access permissions come in three levels -- the file's owner, the group, and others -- each denoted by a letter as shown in the tables below, and each level can be granted the three permissions r, w, and x.
The official definition of this command is:
chmod - change mode
So chmod controls how a file may be accessed by others.
The syntax is as follows:
$ chmod [-cfvR] [--help] [--version] mode file...
The mode is specified in the form [ugoa] [+-=] [rwxX],
where u is the file's owner, g is users in the same group as the owner, o is everyone else, and a is all three.
+ adds a permission, - removes one, and = sets the permissions exactly. r means readable, w writable, x executable, and X executable only if the file is a directory or already has execute permission for some user.
-R : apply the same permission change to all files and subdirectories under the given directory (i.e. change them recursively).
Note that only the file's owner and the superuser may change a file's or directory's permissions with chmod.
Permissions can be specified in either symbolic mode or absolute (numeric) mode.
I prefer the absolute numeric mode -- crude but simple.
Symbolic mode involves several pieces: the user class, the operator, and the permission being set.
who | user class | description |
---|---|---|
u | user | the file's owner |
g | group | the owner's group |
o | others | all other users |
a | all | all users; equivalent to ugo |
Operator symbols:
Operator | description |
---|---|
+ | add the permission for the given user class |
- | remove the permission for the given user class |
= | set the permission exactly, replacing the class's existing permissions |
Permission symbols:
mode | name | description |
---|---|---|
r | read | grant read permission |
w | write | grant write permission |
x | execute | grant execute permission |
chmod can also take permissions as an octal number. A file's or directory's permissions are controlled by 9 permission bits, in groups of three: read/write/execute for the owner, for the group, and for other users. Historically the permissions were kept in a bit mask, with a bit set to 1 meaning the class has that permission. The digits 0-7 therefore encode the following:
No | permissions | rwx | binary |
---|---|---|---|
7 | read + write + execute | rwx | 111 |
6 | read + write | rw- | 110 |
5 | read + execute | r-x | 101 |
4 | read only | r-- | 100 |
3 | write + execute | -wx | 011 |
2 | write only | -w- | 010 |
1 | execute only | --x | 001 |
0 | none | --- | 000 |
Next, let us make the file a.c readable by everyone. There are three equivalent ways:
chmod ugo+r a.c
chmod a+r a.c
chmod 444 a.c
In detail:
Method 1:
# initially the file has no permissions at all
$ ll
# grant read to everyone
$ chmod ugo+r a.c
$ ll
-r--r--r-- 1 user user 5KB Feb 12 22:22 a.c
Method 2:
# initially the file has no permissions at all
$ ll
# grant read to everyone
$ chmod a+r a.c
$ ll
-r--r--r-- 1 user user 5KB Feb 12 22:23 a.c
Method 3:
# initially the file has no permissions at all
$ ll
# grant read to everyone
$ chmod 444 a.c
$ ll
-r--r--r-- 1 user user 5KB Feb 12 22:24 a.c
Next, let us make a.c readable and writable by the owner and group, while other users can read but not write.
In symbolic mode:
$ ll
-r--r--r-- 1 user user 5KB Feb 12 22:24 a.c
$ chmod ug+rw,o+r,o-w a.c
$ ll
-rw-rw-r-- 1 user user 5KB Feb 12 22:26 a.c
In numeric mode:
$ ll
-r--r--r-- 1 user user 5KB Feb 12 22:24 a.c
$ chmod 664 a.c
$ ll
-rw-rw-r-- 1 user user 5KB Feb 12 22:26 a.c
Now suppose we want a file to carry execute permission only, regardless of its previous permissions. In symbolic mode we can use =, and in numeric mode simply the digit 1, as follows:
$ chmod a=x filename
# or
$ chmod 111 filename
# reading is no longer possible
$ cat a.c
cat: a.c: Permission denied
A file with only execute permission can be neither read nor written, which also serves to protect it.
In fact, besides rwx, each file or directory has additional special permission bits:
mode | name | description |
---|---|---|
X | special execute | set execute only if the file is a directory, or some user class already has execute permission |
s | setuid/setgid | when the file is executed, run with the owner's (setuid) or group's (setgid) identity, per the who argument |
t | sticky bit | set the sticky bit; on a directory, only a file's owner (or the superuser) may then delete or rename its files |
For example, chmod 4755 filename makes the program run with the privileges of its owner (root, if root owns it).
command | description |
---|---|
chmod 4755 file | the 4 sets the setuid bit; the rest is u=rwx (4+2+1), go=rx (4+1 each) |
find path/ -type d -exec chmod a-x {} \; | remove execute permission for all users on path/ and all directories under it (not files); use -type f to match files instead |
find path/ -type d -exec chmod a+x {} \; | allow all users to enter or traverse the directories under path/ |
The Linux chown command sets the owner and the associated group of a file.
The official definition is:
chown - change file owner and group
A Linux/Unix tenet is that everything is a file, and every file, as described under chmod, has an owner.
chown assigns a file to a given user and/or group. The user may be a user name or user ID, the group a group name or group ID; the files are a space-separated list of files to change, and wildcards are supported.
Note, however, that chown requires superuser (root) privileges, or the use of sudo.
The syntax is:
$ chown [option] [user[:group]] file...
# or
$ chown [option] --reference=RFILE file...
where user is the new owner's user name or ID, and group is the new group's name or ID.
You can also use the --reference=RFILE option to copy ownership from a reference file.
Other options include:
-c : like -v, but report only the files actually changed
-R : process the given directories and all files under them recursively
The simplest usage is to specify the user and group directly, like so:
$ ll
-rw-rw-r--. 1 user user 5 May 7 14:56 a
$ sudo chown user1:group1 a
$ ll
-rw-rw-r--. 1 user1 group1 5 May 7 14:56 a
The command above assigns file a to user user1 and group group1. Both user1 and group1 must exist, or you will get an "invalid user" or "invalid group" error.
The next form is typically used to share a user's file with a group:
$ sudo chown :newgroup filename
Here the owner is unchanged; only the file's group is changed.
$ sudo chown -c user1 a b c d
changed ownership of "b" from user to user1
changed ownership of "c" from user to user1
changed ownership of "d" from user to user1
$ sudo chown -v user1 a b c d
changed ownership of "a" from user to user1
changed ownership of "b" from user to user1
changed ownership of "c" from user to user1
changed ownership of "d" from user to user1
This example shows the difference between -c and -v: -v reports every file processed, while -c reports only the ones actually changed.
$ sudo chown -R user:group file directory
This command recursively makes the file file and the directory directory (with everything beneath it) owned by user and group group.
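The --reference form mentioned above can be sketched like this (ref.txt and stage.txt are hypothetical file names):

```shell
# copy both the owner and the group from ref.txt onto stage.txt
sudo chown --reference=ref.txt stage.txt
# verify: both files now show the same owner and group
ls -l ref.txt stage.txt
```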
cp is a simple command: it is literally short for "copy", and it copies data.
The official meaning is:
cp - copy files and directories
The basic format is: cp, followed by options, then the SOURCE, and finally the DEST.
$ cp [option]... SOURCE... DIRECTORY
Here are examples of the most common options.
First, assume two directories dir1 and dir2 with the following contents:
dir1
├── a
├── b
├── c
└── d
dir2
├── b
├── d
└── e
0 directories, 7 files
Their details are as follows:
$ ll *
dir1:
total 0
-rw-rw-r-- 1 user user 0 Jul 20 21:23 a
-rw-rw-r-- 1 user user 0 Jul 20 21:23 b
-rw-rw-r-- 1 user user 0 Jul 20 21:23 c
-rw-rw-r-- 1 user user 0 Jul 20 21:23 d
dir2:
total 0
-rw-rw-r-- 1 user user 0 Jul 20 21:25 b
-rw-rw-r-- 1 user user 0 Jul 20 21:25 d
-rw-rw-r-- 1 user user 0 Jul 20 21:25 e
The most commonly used cp options are:
-i : prompt for confirmation before overwriting an existing file
-r : copy directories and their contents recursively; required when copying directories
-u : copy only files that are missing from, or newer than those in, the destination
-v : print what is being copied
The -v option shows each copy as it happens, and works when copying directories too:
$ cp -rv dir1/* dir2/
‘dir1/a’ -> ‘dir2/a’
‘dir1/b’ -> ‘dir2/b’
‘dir1/c’ -> ‘dir2/c’
‘dir1/d’ -> ‘dir2/d’
You should already be in the habit of using -i, just as with rm; otherwise, like rm -rf /, one slip and the whole company is gone.
$ cp -i dir1/* dir2/
cp: overwrite ‘dir2/a’? y
cp: overwrite ‘dir2/b’? y
cp: overwrite ‘dir2/c’? y
Use this option with care when there are very many files!!
-u stands for update: when copying from one directory to another, only files that do not exist in the destination, or are newer than the destination's copies, are copied.
Run the following command:
$ cp -u dir1/* dir2/
and you get:
$ ll *
dir1:
total 0
-rw-rw-r-- 1 user user 0 Jul 20 21:23 a
-rw-rw-r-- 1 user user 0 Jul 20 21:23 b
-rw-rw-r-- 1 user user 0 Jul 20 21:23 c
-rw-rw-r-- 1 user user 0 Jul 20 21:23 d
dir2:
total 0
-rw-rw-r-- 1 user user 0 Jul 20 21:29 a
-rw-rw-r-- 1 user user 0 Jul 20 21:25 b
-rw-rw-r-- 1 user user 0 Jul 20 21:29 c
-rw-rw-r-- 1 user user 0 Jul 20 21:25 d
-rw-rw-r-- 1 user user 0 Jul 20 21:25 e
Now suppose a directory filename contains documents named 1.txt, 2.txt, 3.txt, all the way up to 9999.txt and 10000.txt, and you want to copy the files from the Nth, N.txt, through the Mth, M.txt, into another directory. Here is how:
$ cp {N..M}.txt newfilename/
This trick is absolutely brilliant -- it beats most GUI programs hands down.
The Linux curl command is a tool for transferring data to or from a server.
It is powerful and supports a great many protocols: DICT, FILE, FTP, FTPS, GOPHER, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET and TFTP.
The command is also designed to work without user interaction.
The official definition is:
curl - transfer a URL
$ curl [options / URLs]
Options:
-O : write output to a file, keeping the remote file's name
-u : authenticate with a user name and password configured on the server
With -O, the downloaded data is written to a file named as on the server. Here we download the Linux kernel source as an example:
$ curl https://mirrors.edge.kernel.org/pub/linux/kernel/v2.4/linux-2.4.32.tar.gz -O
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
1 36.7M 1 575k 0 0 17431 0 0:36:50 0:00:33 0:36:17 27222
Some sites require authorization; in that case the -u option supplies the user name and password:
$ curl -u username https://www.website.com/
Enter host password for user 'username':
Naturally, a tool this powerful supports batch downloads, including URL globbing.
For example, to fetch file1, file5 and file7 from ftp://ftp.example.com/:
$ curl ftp://ftp.example.com/file{1,5,7}.txt
And to fetch the 100 files file1 through file100 from ftp://ftp.example.com/:
$ curl ftp://ftp.example.com/file[1-100].txt
.. note::
   Time knows only how to hasten men to old age; it has no faith in tender feeling. Long is the grief at the parting pavilion; tears on a spring gown, and the wine sobers all too easily.
   -- Yan Shu, "Cai Sang Zi"

The date command prints, displays, or sets the date and time.
The official definition is:
date - print or set the system date and time
Usage:
$ date [OPTION]... [+FORMAT]
$ date [-u | --utc| --universal] [MMDDhhmm[[CC]YY][.ss]]
The more common OPTIONs are:
-R : output the date in RFC 5322 (e-mail) format, which includes the time zone offset
-u, --utc, --universal : print or set Coordinated Universal Time
-d, --date=STRING : display the time described by STRING instead of "now"
By default, date prints in the local time zone (CST below):
$ date
Mon Jun 5 15:11:44 CST 2014
Adding -R includes the time zone offset -- +0800 for our zone, UTC+8:
$ date -R
Mon, 05 Jun 2014 15:15:25 +0800
The -u, --utc, --universal options display Coordinated Universal Time:
$ date -u
Mon Jun 5 07:15:46 UTC 2014
$ date --utc
Mon Jun 5 07:15:48 UTC 2014
$ date --universal
Mon Jun 5 07:15:55 UTC 2014
The output can be formatted with +FORMAT sequences; note that upper and lower case letters mean different things.
The most common date-and-time formats:
# year-month-day hour:minute:second
$ date +%Y-%m-%dT%H:%M:%S
2013-01-17T18:01:08
# or, with identical effect:
$ date +%FT%T
2013-01-17T18:02:12
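The -d option listed above deserves a quick sketch; it accepts both fixed dates and relative expressions (GNU date assumed):

```shell
# a fixed date, reformatted
date -d '2013-01-17' +%F
# relative expressions also work
date -d 'next friday' +%A
date -d '2 days ago' +%F
```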
# The Linux dd command
I have never quite pinned down what `dd` abbreviates (one common story is that it comes from the `DD` -- Data Definition -- statement of IBM's JCL). The command belongs in the "Linux showing off" category; I came to it quite late myself, but some of its features are worth trying.
The official meaning is:
> `dd` - convert and copy a file
> Given that meaning, wouldn't `cc` have been the better name? ^_^
The `dd` command copies files, converting or formatting them along the way.
`dd` is very powerful; for fairly low-level problems, it often produces surprisingly good results.
## Command format
The command itself is simple:
```bash
$ dd options
```
For a beginner, mastering just the few parameters below is entirely sufficient.
Since the very first word of its description is "copy", dd can normally stand in for cp -- provided the right parameters are given. For example:
# default cp copy of a 1 GB file: 1.05 seconds
$ time cp a b
cp a b 0.02s user 1.05s system 75% cpu 1.403 total
# default dd copy of the same 1 GB file: an astonishing 29.17 seconds
$ time dd if=a of=b
2048000+0 records in
2048000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 34.7214 s, 30.2 MB/s
dd if=a of=b 1.31s user 29.17s system 87% cpu 34.996 total
Why is dd so slow? Simple: when bs is not specified it defaults to 512 bytes, so dd chops the copy into 512-byte blocks, and all the time is wasted there.
Simply adding that parameter boosts the speed dramatically:
$ time dd if=a of=b bs=2M
500+0 records in
500+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 1.04747 s, 1.0 GB/s
dd if=a of=b bs=2M 0.00s user 1.05s system 78% cpu 1.332 total
$ time dd if=a of=b bs=4M
250+0 records in
250+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 1.00866 s, 1.0 GB/s
dd if=a of=b bs=4M 0.00s user 1.00s system 76% cpu 1.304 total
$ time dd if=a of=b bs=8M
125+0 records in
125+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.937974 s, 1.1 GB/s
dd if=a of=b bs=8M 0.00s user 0.92s system 79% cpu 1.164 total
$ time dd if=a of=b bs=10M
100+0 records in
100+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 1.01666 s, 1.0 GB/s
dd if=a of=b bs=10M 0.00s user 1.03s system 82% cpu 1.257 total
My most frequent use of dd is testing a disk's read/write speed -- for example, simply writing 1 GB and seeing how long it takes:
$ dd if=/dev/zero of=tmp bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.7338 s, 1.4 GB/s
$ dd if=/dev/zero of=tmp bs=2M count=500
500+0 records in
500+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.611315 s, 1.7 GB/s
$ dd if=/dev/zero of=tmp bs=4M count=250
250+0 records in
250+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.602517 s, 1.7 GB/s
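The runs above measure write speed; a matching read test can reuse the file just written (note the first read may be served from the page cache, so the figure can be optimistic):

```shell
# write a 1 GB test file first
dd if=/dev/zero of=tmp bs=1M count=1000
# then read it back, discarding the data, to estimate read speed
dd if=tmp of=/dev/null bs=1M
rm -f tmp
```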
From these numbers you can easily script a rough evaluation of the system's overall throughput.
Of course, what system administrators use dd for most is system backup and cloning -- but that is a story for another day.
Checking df with man, the official meaning is:
report file system disk space usage
That is, it reports disk space usage per file system -- how much of the disk is occupied, how much remains, and so on.
Its use is entirely conventional: df [options], where some of the more useful options are:
-a, --all : not used much, but lists everything, including inaccessible file systems
-B, --block-size=SIZE : display in units of SIZE -- e.g. M or T for MB and TB respectively
--total : conveniently appends a grand-total line at the end
-h, --human-readable : the friendliest and most-used option
-H, --si : for the purists who insist that 1K is 1000 rather than 1024
-l, --local : with today's ubiquitous NFS and network mounts, the option you need to show local file systems only
-T, --print-type : print each file system's type, e.g. xfs or zfs
With no options at all, the output looks like this:
$ df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/cl-root 976083292 242281612 733801680 25% /
devtmpfs 16315508 0 16315508 0% /dev
tmpfs 16332416 18788 16313628 1% /dev/shm
tmpfs 16332416 1643588 14688828 11% /run
tmpfs 16332416 0 16332416 0% /sys/fs/cgroup
/dev/sdb2 1038336 407812 630524 40% /boot
/dev/sda 93759481856 72887620044 20871861812 78% /data
/dev/mapper/cl-home 32210167688 29543283400 2666884288 92% /home
tmpfs 3266484 236 3266248 1% /run/user/1000
(What I really want to know is how big the data directory is, ha.)
The -a option prints everything, but much of it is truly not what ordinary users need or want:
$ df -a
Filesystem 1K-blocks Used Available Use% Mounted on
rootfs - - - - /
sysfs 0 0 0 - /sys
proc 0 0 0 - /proc
devtmpfs 16315508 0 16315508 0% /dev
securityfs 0 0 0 - /sys/kernel/security
tmpfs 16332416 18788 16313628 1% /dev/shm
devpts 0 0 0 - /dev/pts
tmpfs 16332416 1643588 14688828 11% /run
tmpfs 16332416 0 16332416 0% /sys/fs/cgroup
cgroup 0 0 0 - /sys/fs/cgroup/systemd
pstore 0 0 0 - /sys/fs/pstore
cgroup 0 0 0 - /sys/fs/cgroup/memory
cgroup 0 0 0 - /sys/fs/cgroup/pids
cgroup 0 0 0 - /sys/fs/cgroup/freezer
cgroup 0 0 0 - /sys/fs/cgroup/perf_event
cgroup 0 0 0 - /sys/fs/cgroup/net_cls,net_prio
cgroup 0 0 0 - /sys/fs/cgroup/blkio
cgroup 0 0 0 - /sys/fs/cgroup/cpuset
cgroup 0 0 0 - /sys/fs/cgroup/cpu,cpuacct
cgroup 0 0 0 - /sys/fs/cgroup/devices
cgroup 0 0 0 - /sys/fs/cgroup/hugetlb
configfs 0 0 0 - /sys/kernel/config
/dev/mapper/cl-root 976083292 242283596 733799696 25% /
selinuxfs 0 0 0 - /sys/fs/selinux
systemd-1 - - - - /proc/sys/fs/binfmt_misc
debugfs 0 0 0 - /sys/kernel/debug
mqueue 0 0 0 - /dev/mqueue
hugetlbfs 0 0 0 - /dev/hugepages
/dev/sdb2 1038336 407812 630524 40% /boot
/dev/sda 93759481856 72887620044 20871861812 78% /data
/dev/mapper/cl-home 32210167688 29543283400 2666884288 92% /home
sunrpc 0 0 0 - /var/lib/nfs/rpc_pipefs
tmpfs 3266484 236 3266248 1% /run/user/1000
gvfsd-fuse 0 0 0 - /run/user/1000/gvfs
fusectl 0 0 0 - /sys/fs/fuse/connections
binfmt_misc 0 0 0 - /proc/sys/fs/binfmt_misc
If your disks are in the TB range you can use -BT; if they are in the PB range -- congratulations -- you can use -BP.
$ df -BT
Filesystem 1T-blocks Used Available Use% Mounted on
/dev/mapper/cl-root 1T 1T 1T 25% /
devtmpfs 1T 0T 1T 0% /dev
tmpfs 1T 1T 1T 1% /dev/shm
tmpfs 1T 1T 1T 11% /run
tmpfs 1T 0T 1T 0% /sys/fs/cgroup
/dev/sdb2 1T 1T 1T 40% /boot
/dev/sda 88T 68T 20T 78% /data
/dev/mapper/cl-home 30T 28T 3T 92% /home
tmpfs 1T 1T 1T 1% /run/user/1000
Here --total shines, appending an overall total as the last row:
$ df --total
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/cl-root 976083292 242283596 733799696 25% /
devtmpfs 16315508 0 16315508 0% /dev
tmpfs 16332416 18788 16313628 1% /dev/shm
tmpfs 16332416 1643588 14688828 11% /run
tmpfs 16332416 0 16332416 0% /sys/fs/cgroup
/dev/sdb2 1038336 407812 630524 40% /boot
/dev/sda 93759481856 72887620044 20871861812 78% /data
/dev/mapper/cl-home 32210167688 29543283400 2666884288 92% /home
tmpfs 3266484 236 3266248 1% /run/user/1000
Let me pull that last line out separately to show off: total 127015350412 102675257464 24340092948 81% -
This is the option I use most, probably the most common of all: as seen earlier, -h means human-readable, using units such as M and G for our convenience:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/cl-root 931G 232G 700G 25% /
devtmpfs 16G 0 16G 0% /dev
tmpfs 16G 19M 16G 1% /dev/shm
tmpfs 16G 1.6G 15G 11% /run
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/sdb2 1014M 399M 616M 40% /boot
/dev/sda 88T 68T 20T 78% /data
/dev/mapper/cl-home 30T 28T 2.5T 92% /home
tmpfs 3.2G 236K 3.2G 1% /run/user/1000
If you insist that 1K is 1000 rather than 1024, this is the option for you:
$ df -H
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/cl-root 1.0T 249G 752G 25% /
devtmpfs 17G 0 17G 0% /dev
tmpfs 17G 20M 17G 1% /dev/shm
tmpfs 17G 1.7G 16G 11% /run
tmpfs 17G 0 17G 0% /sys/fs/cgroup
/dev/sdb2 1.1G 418M 646M 40% /boot
/dev/sda 97T 75T 22T 78% /data
/dev/mapper/cl-home 33T 31T 2.8T 92% /home
tmpfs 3.4G 242k 3.4G 1% /run/user/1000
With networks everywhere today -- mounts flying around, NFS connected in every direction -- without the -l option you could hardly tell which file system is which. -l restricts the listing to local file systems:
$ df -l
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/cl-root 976083292 242283596 733799696 25% /
devtmpfs 16315508 0 16315508 0% /dev
tmpfs 16332416 18788 16313628 1% /dev/shm
tmpfs 16332416 1643588 14688828 11% /run
tmpfs 16332416 0 16332416 0% /sys/fs/cgroup
/dev/sdb2 1038336 407812 630524 40% /boot
/dev/sda 93759481856 72887620044 20871861812 78% /data
/dev/mapper/cl-home 32210167688 29543283400 2666884288 92% /home
tmpfs 3266484 236 3266248 1% /run/user/1000
There are many file system types; they can be shown with the -T option:
$ df -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/mapper/cl-root xfs 976083292 242283596 733799696 25% /
devtmpfs devtmpfs 16315508 0 16315508 0% /dev
tmpfs tmpfs 16332416 18788 16313628 1% /dev/shm
tmpfs tmpfs 16332416 1643588 14688828 11% /run
tmpfs tmpfs 16332416 0 16332416 0% /sys/fs/cgroup
/dev/sdb2 xfs 1038336 407812 630524 40% /boot
/dev/sda xfs 93759481856 72887620044 20871861812 78% /data
/dev/mapper/cl-home xfs 32210167688 29543283400 2666884288 92% /home
tmpfs tmpfs 3266484 236 3266248 1% /run/user/1000
.. note::
   Amid fading grass, hazy light and the lingering sunset -- who, wordless, understands why I lean upon the railing?
   -- Liu Yong (Song), "Die Lian Hua"

The Linux diff command compares files and shows their differences.
There are of course many professional comparison tools, but on the Linux command line this is the original one, ready the moment the system boots.
The official definition is:
GNU diff - compare files line by line
diff compares text files line by line.
If directories are given, diff compares the files with the same names in both directories, but does not descend into subdirectories.
$ diff [OPTION]... FILES
Options:
-c : display the full context and mark the differences.
-u : display the differences in unified (merged) form.
-y or --side-by-side : display the two files side by side in two columns.
Suppose we have two files a and b with the following contents:
$ cat a
This is a.
Hello a.
Hello World.
$ cat b
This is b.
Hello b.
Hello World.
One more line.
By default, just run:
$ diff a b
1,2c1,2
< This is a.
< Hello a.
> This is b.
> Hello b.
3a4
> One more line.
Notice "1,2c1,2", with the letter c in the middle, and "3a4", with the letter a.
What do a and c mean? The letter in the middle is the operation to perform on the first file (a=add, c=change, d=delete) to make it match the second.
So 1,2c1,2 means lines 1-2 must be changed to match, and 3a4 means a line must be added to match.
The side-by-side form is far friendlier: two columns make comparison easy.
$ diff a b -y
This is a. | This is b.
Hello a. | Hello b.
Hello World. Hello World.
> One more line.
In this output:
"|" marks lines that differ between the two files;
"<" marks a line present only in the first file;
">" marks a line present only in the second file.
Context mode outputs the full file contents with the differences marked, including the files' timestamps.
Below, *** introduces a's content and --- introduces b's.
$ diff a b -c
*** a 2013-03-04 23:20:20.322345200 +0800
***************
*** 1,3 ****
! This is a.
! Hello a.
Hello World.
! This is b.
! Hello b.
Hello World.
+ One more line.
Unified mode interleaves the contents and marks the differences, again including timestamps in the header.
Below, lines starting with - come from a and lines starting with + come from b.
$ diff a b -u
@@ -1,3 +1,4 @@
-This is a.
-Hello a.
+This is b.
+Hello b.
Hello World.
+One more line.
The -w option ignores all white space when comparing:
# diff -w name_list.txt name_list_new.txt
2c2,3
< John Doe
---
> John M Doe
> Jason Bourne
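As noted above, plain diff does not descend into subdirectories; a small sketch of the recursive form (the directory names here are made up):

```shell
# build two small directory trees that differ in one file
mkdir -p left/sub right/sub
echo "same"    > left/sub/x
echo "same"    > right/sub/x
echo "only me" > left/only_in_left
# -r recurses into subdirectories; -q only names what differs
diff -rq left right
rm -rf left right
```

The -q report here is a single line: `Only in left: only_in_left`.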
Checking du with man, we learn that it means "estimate file space usage".
It reports the disk space consumed by files and directories -- how much room they actually take up.
The command is used as:
$ du [options]... [FILE]...
Some of the more useful options are:
-0, --null : affects only the output -- end each entry with NUL instead of newline, so everything appears on one line
-a, --all : count all files, not just directories
-B, --block-size=SIZE : like the df option of the same name
-c, --total : append a grand total as the last line
-d, --max-depth=N : print entries only down to depth N of the directory tree
-h, --human-readable : as with df, scale sizes automatically
-l, --count-links : count sizes of hard links, even if already counted
-s, --summarize : display only a total for each argument
To display just the total size of the current directory:
$ du -sh
4.0G .
With no options, du lists each directory:
$ du
2048000 ./original
4096000 .
$ du -0
2048000 ./original4096000 .
$ du -a
204800 ./xaa
204800 ./xab
204800 ./xac
204800 ./xad
204800 ./xae
204800 ./xaf
204800 ./xag
204800 ./xah
204800 ./xai
204800 ./xaj
2048000 ./original/dat1
2048000 ./original
0 ./tsta
4096000 .
$ du -BG
2G ./original
4G .
$ du -c
2048000 ./original
4096000 .
4096000 total
$ du -h
2.0G ./original
4.0G .
$ du -s
4096000 .
Combining several of the options above shows per-file totals along with timestamps:
$ du -a --time --time-style=full-iso
200M 2014-06-21 22:18:45.551076154 +0800 ./xaa
200M 2014-06-21 22:18:45.752074291 +0800 ./xab
200M 2014-06-21 22:18:45.951072446 +0800 ./xac
200M 2014-06-21 22:18:46.149070610 +0800 ./xad
200M 2014-06-21 22:18:46.348068766 +0800 ./xae
200M 2014-06-21 22:18:46.563066772 +0800 ./xaf
200M 2014-06-21 22:18:46.762064928 +0800 ./xag
200M 2014-06-21 22:18:46.961063083 +0800 ./xah
200M 2014-06-21 22:18:47.167061173 +0800 ./xai
200M 2014-06-21 22:18:47.366059329 +0800 ./xaj
2.0G 2014-06-21 22:17:48.740602788 +0800 ./original/dat1
2.0G 2014-06-21 22:19:01.134931691 +0800 ./original
0 2014-06-21 22:17:46.501499784 +0800 ./tsta
4.0G 2014-06-21 22:19:57.461499784 +0800 .
The echo command prints strings or the values of variables to the terminal, much like Python's print or C's printf; it is one of the most used commands on a Linux system.
Printing fixed strings is mostly done in shell scripts; printing variable values is the everyday use.
The command format is: echo [OPTION] [STRING]
Commonly used options are:
-n : do not output the trailing newline
-e : enable interpretation of backslash escapes such as \n and \t
Print a string:
$ echo "Hello Linux"
Hello Linux
Print the value of a variable:
$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
A variable's value is usually extracted by prefixing its name with the $ symbol, e.g. $PATH, and then printed with echo. Plain strings can also be echoed to the screen as prompts for the user.
PATH here plays the same role as the environment variables on Windows.
A few Linux commands can also be embedded in the output:
$ echo `date`
Sat 12 Feb 2011 22:19:03 PM CST
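The -e option mentioned above can be sketched quickly; without it (in bash), the escape sequences are printed literally:

```shell
# without -e the backslash sequences stay literal
echo 'line1\nline2'
# with -e, \n becomes a real newline and \t a tab
echo -e 'line1\nline2'
```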
Query the result of the previous command:
$ echo $?
$? is a special shell variable holding the exit status of the previous command; 0 means success.
env is short for "environment"; it views or modifies the current environment.
Linux is a multi-user platform, and so that each user can have their own settings, quite a few environment variables are used -- for example, echo $HOME prints a different path for each user.
Viewing and modifying them can be managed with the env command.
The official definition is:
env - run a program in a modified environment
The syntax is as follows:
$ env [OPTION]... [-] [NAME=VALUE]... [COMMAND [ARG]...]
Common options:
-i : start with a new, empty environment
-u : remove the named variable from the environment
-C : change the working directory
-S : split the following string into separate arguments
By default, typing env prints the currently set environment:
$ env
HOSTTYPE=x86_64
LANG=en_US.utf8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:
NAME=LOCALHOST
HOME=/home/username
USER=username
LOGNAME=username
SHELL=/usr/bin/bash
SHLVL=1
PWD=/home/username/mycode/c
OLDPWD=/home/username/mycode/python
PAGER=less
LESS=-R
...
The most important use of env, of course, is setting environment variables, typically:
$ env NAME=what-you-want-to-set command
(echo $NAME inside that command will then show the value.)
If you want a program to run unaffected by the existing environment, the -i option starts it in a completely fresh one:
$ env -i program
-u removes a given variable from the environment, for example:
$ env -u PWD
HOSTTYPE=x86_64
LANG=en_US.utf8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:
NAME=LOCALHOST
HOME=/home/username
USER=username
LOGNAME=username
SHELL=/usr/bin/bash
SHLVL=1
OLDPWD=/home/username/mycode/python
PAGER=less
LESS=-R
...
Compared with plain env, the PWD variable is now gone.
The working directory can be changed with -C:
$ pwd
/home/username/linux/scripts
$ env -C .. pwd
/home/username/linux
The -S option is mostly used in scripts: it lets a shebang line pass multiple arguments, which would otherwise be treated as a single one. Taking a script as an example:
#!/usr/bin/env perl -w -T
fails with the error:
/usr/bin/env: 'perl -w -T': No such file or directory
Adding -S solves it, as follows:
#!/usr/bin/env -S perl -w -T
.. note::
   Since we parted like drifting clouds, ten years have flowed past like running water.

fdisk is the most common command for inspecting the partition information on a disk.
fdisk can display the partitions along with details such as the file system type.
Device names are usually /dev/sda, /dev/sdb, and so on.
Older machines may still have /dev/hd* (IDE) device names, but such devices have been phased out.
fdisk can also create and manipulate partition tables; it understands GPT, MBR, Sun, SGI and BSD partition tables.
A block device can be divided into one or more logical disks called partitions. This division is recorded in the partition table, usually found in sector 0 of the disk.
The official explanation of fdisk is:
fdisk - manipulate disk partition table
The syntax is:
$ fdisk [options] device
$ fdisk -l [device...]
Some common options:
-l : list the partition tables of the specified devices
-L, --color[=when] : colorize the output; when can be auto, never or always (default auto)
-l is also the only fdisk command I recommend to beginners -- it merely lists the current partitions.
Never casually run fdisk's other operations; it is all too easy to wipe a disk. Remember this well.
$ fdisk -l
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/sda: 256.1 GB, 256060514304 bytes, 500118192 sectors # disk size and sector information
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: FAF37680-0ECE-4BE7-93FC-E87A8F2F6455
file, the master identifier
The official explanation of file is:
file - determine file type
That is, it identifies a file's type, and can also detect the encoding of some files. It works by examining the file's header information rather than, as Windows does, deciding the type from the extension -- so whether you add a suffix or not truly does not matter. As for Windows... let's say no more.
Below are some rather practical examples.
Following file directly with a file name gives:
$ file book.pdf
book.pdf: PDF document, version 1.3
$ file book
book: PDF document, version 1.3
As you can see, the suffix makes no difference at all.
$ file -b book.pdf
PDF document, version 1.3
The -b option stands for brief: only the identification is shown, without the file name -- which, for long lists of files, is actually not very friendly.
$ file -i book.pdf
book.pdf: application/pdf; charset=binary
The -i option prints the MIME type -- the machine-oriented type/subtype naming used by mail and the web. It still reads clearly enough for what we want to know, and that is all we ask of the file command.
$ cat hello.txt
sunset.jpg
$ file -f hello.txt
sunset.jpg: JPEG image data, JFIF standard 1.01
This one sounds convoluted but is actually quite simple: the -f option means "files from", i.e. the names of the files you want identified are themselves listed in a file, and file reads that list and reports on each entry.
$ file -F " === " sunset.jpg
sunset.jpg === JPEG image data, JFIF standard 1.01
The -F option replaces the default ":" separator in the output. Honestly, the default feels fine; this is more of a customization feature.
$ file a.jpg
a.jpg: symbolic link to `sunset.jpg'
$ file -L a.jpg
a.jpg: JPEG image data, JFIF standard 1.01
By default, without -L, file only reports that the file is a symbolic link; with -L it follows the link and reports on the target. A very nice feature.
The find command searches for files under a given directory, and it is enormously powerful.
The official definition is:
find - search for files in a directory hierarchy
The Linux philosophy is that everything is a file; find's mission is that everything can be found.
The syntax is:
$ find [-H] [-L] [-P] [-D debugopts] [-Olevel] [path...] [expression]
The more commonly used tests and actions are:
-exec <command> : for each file matched, execute the given command;
-size <size> : match files of the given size;
-mtime <days> : match files or directories by modification time, in units of 24 hours;
-type <type> : match only files of the given type;
With no arguments at all, find lists the subdirectories and files under the current directory:
$ ls -l
total 310M
-rw-rw-r-- 1 user user 10M Mar 21 20:01 a
drwxrwxr-x 2 user user 22 Mar 21 20:01 aa
-rw-rw-r-- 1 user user 100M Mar 21 20:01 b
-rw-rw-r-- 1 user user 200M Mar 21 20:01 c
$ find
.
./a
./aa
./aa/d
./b
./c
Search by file size with -size:
$ find . -size -100M
.
./a
./aa
$ find . -size 100M
./b
$ find . -size +100M
./c
./aa/d
The -mtime test matches on modification time; for example, the following finds files under the current directory not modified in the last 60 days:
$ find . -mtime +60
# modified within the last 2 days
$ find . -mtime -2
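Searching by name is at least as common as the tests above, though not shown here; a small sketch (the pattern and file names are made up):

```shell
# find all .log files under the current directory
# (quote the pattern so the shell does not expand it first)
find . -name '*.log'
# case-insensitive variant
find . -iname '*.LOG'
```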
I often combine find with its -exec action -- say, to find every file in a directory and change its permissions. One simple command does it:
$ find /path/ -type f -exec chmod 644 {} \;
This recursively searches all files under /path/ and runs chmod on each one found.
free
Among Linux system-monitoring tools, this command is one of the most frequently used.
Checking with man, the official description is:
Display amount of free and used memory in the system
That is, it displays the free and used memory in the system, which it reads directly from the /proc/meminfo file.
First, the output of free without any options:
$ free
total used free shared buff/cache available
Mem: 32664832 15667736 674136 464892 16322960 15803156
Swap: 16449532 3039756 13409776
That looks like a lot of digits but is not very readable; I prefer to add the -h option.
As with df and similar commands, -h stands for human-readable. There are also -b, -k, -m and -g, which display the figures in bytes, KB, MB and GB respectively.
$ free -h
total used free shared buff/cache available
Mem: 31G 14G 655M 453M 15G 15G
Swap: 15G 2.9G 12G
Wow, much more concise.
The columns mean:
total: total physical memory; here, 31 GB
used: memory already in use; here, 14 GB
free: unused memory
shared: memory shared by multiple processes
buff/cache: memory used for disk buffering and caching; buffers and cache are counted together here but serve different purposes
available: an estimate of how much memory is available for new applications
Another handy usage: if you want to re-check free periodically, use the -s option. Its argument is in seconds, so free reports memory usage once every that many seconds. For example, every 2 seconds:
$ free -s 2
total used free shared buff/cache available
Mem: 32664832 15668528 670964 464892 16325340 15802360
Swap: 16449532 3039756 13409776
total used free shared buff/cache available
Mem: 32664832 15669760 669724 464892 16325348 15801124
Swap: 16449532 3039756 13409776
total used free shared buff/cache available
Mem: 32664832 15670220 669248 464892 16325364 15800652
Swap: 16449532 3039756 13409776
total used free shared buff/cache available
Mem: 32664832 15669264 670204 464892 16325364 15801624
Swap: 16449532 3039756 13409776
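If you want a fixed number of samples rather than an endless stream, the procps-ng version of free also accepts -c to stop after a given count (an assumption worth checking with man free on your system, since very old versions may lack it):

```shell
# Two human-readable samples, one second apart, then exit.
# -c (count) is the procps-ng option; -s sets the interval in seconds.
free -h -s 1 -c 2
```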
$ cat /proc/meminfo
This file is where free gets its information, so you can also check free's numbers by monitoring this file directly.
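For scripting, the same figures can be pulled straight out of /proc/meminfo, for example with awk (the field names are the standard kernel ones; values are in kB):

```shell
# Extract total, free and available memory from /proc/meminfo.
awk '/^MemTotal:/     {total=$2}
     /^MemFree:/      {free=$2}
     /^MemAvailable:/ {avail=$2}
     END {printf "total=%dkB free=%dkB available=%dkB\n", total, free, avail}' \
    /proc/meminfo
```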
The Linux grep command searches files for lines that match a given pattern.
The official definition is:
grep, egrep, fgrep - print lines matching a pattern
grep supports regular expressions and is a powerful text-search tool.
The syntax is fairly involved, because the feature set really is large.
$ grep [OPTION...] PATTERNS [FILE...]
$ grep [OPTION...] -e PATTERNS ... [FILE...] # -e gives the patterns explicitly
$ grep [OPTION...] -f PATTERN_FILE ... [FILE...] # -f reads patterns from a file
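A small sketch of the -e and -f forms (the sample words and patterns are invented):

```shell
workdir=$(mktemp -d)
cd "$workdir"
printf 'alpha\nbeta\ngamma\n' > words
printf 'alpha\ngamma\n' > patterns
grep -e beta words       # pattern given explicitly on the command line
grep -f patterns words   # patterns read from a file, one per line
cd / && rm -rf "$workdir"
```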
The commonly used options are demonstrated below.
Suppose we have the following three files and one directory:
a
This is a
Hello a
b
this is b
Hello b
c
This is c
Hello c
d/d
This is d
Hello d
Searching the current directory for the string "is", we get output from all three files a/b/c, while d produces none because it is a directory.
$ grep is *
a:This is a
b:this is b
c:This is c
grep: d: Is a directory
As with many other commands, adding the -r option searches recursively:
$ grep -r is *
a:This is a
b:this is b
c:This is c
d/d:This is d
Sometimes we want exactly the opposite: the lines that do not contain a string. The -v option inverts the match:
$ grep -rv is *
a:Hello a
b:Hello b
c:Hello c
d/d:Hello d
Now the lines that do not contain "is" are shown.
In other cases we may want a case-insensitive match, for example to catch both This and this:
$ grep -r This *
a:This is a
c:This is c
d/d:This is d
$ grep -ri This *
a:This is a
b:this is b
c:This is c
d/d:This is d
With -i, file b, which differs only in case (perhaps a typo), is found as well.
When a file is long, knowing which line a match is on matters; adding the -n option solves that.
$ grep -rn This *
a:1:This is a
c:1:This is c
d/d:1:This is d
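The whole walk-through can be reproduced in a scratch directory, combining -r, -i and -n in a single call (file contents mirror the example above):

```shell
workdir=$(mktemp -d)
cd "$workdir"
printf 'This is a\nHello a\n' > a
printf 'this is b\nHello b\n' > b
printf 'This is c\nHello c\n' > c
mkdir d && printf 'This is d\nHello d\n' > d/d
grep -rin this .    # recursive, case-insensitive, with line numbers
cd / && rm -rf "$workdir"
```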
… _linux-beginner-gunzip:
The official definition is:
gzip, gunzip, zcat - compression/decompression tool using Lempel-Ziv coding (LZ77)
See the gunzip command for details.
The uname command prints system information; its usage is:
$ uname [OPTION]...
Commonly used options:
-a, --all: print all information
-s, --kernel-name: print the kernel name
-n, --nodename: print the network node hostname
-r, --kernel-release: print the kernel release
-v, --kernel-version: print the kernel version
-m, --machine: print the machine hardware name
-p, --processor: print the processor type, or "unknown"
-i, --hardware-platform: print the hardware platform, or "unknown"
-o, --operating-system: print the operating system
The unzip syntax is:
$ unzip [-cflptuvz][-agCjLMnoqsVX][-P <password>][file.zip][file(s)][-d <dir>][-x <file(s)>] or unzip [-Z]
To list the files contained in an archive without extracting them, use -l:
# unzip -l abc.zip
Archive: abc.zip
Length Date Time Name
94618 05-21-10 20:44 a11.jpg
202001 05-21-10 20:44 a22.jpg
16 05-22-10 15:01 11.txt
46468 05-23-10 10:30 w456.JPG
140085 03-14-10 21:49 my.asp
483188 5 files
The -v option lists the archive contents in verbose format, again without extracting anything.
# unzip -v abc.zip
Archive: abc.zip
Length Method Size Ratio Date Time CRC-32 Name
94618 Defl:N 93353 1% 05-21-10 20:44 9e661437 a11.jpg
202001 Defl:N 201833 0% 05-21-10 20:44 1da462eb a22.jpg
16 Stored 16 0% 05-22-10 15:01 ae8a9910 ? +-|¥+-? (11).txt
46468 Defl:N 39997 14% 05-23-10 10:30 962861f2 w456.JPG
140085 Defl:N 36765 74% 03-14-10 21:49 836fcc3f my.asp
483188 371964 23% 5 files
UNZIP(1) General Commands Manual UNZIP(1)
NAME
unzip - list, test and extract compressed files in a ZIP archive
SYNOPSIS
unzip [-Z] [-cflptTuvz[abjnoqsCDKLMUVWX$/:^]] file[.zip] [file(s) …] [-x xfile(s) …] [-d exdir]
DESCRIPTION
unzip will list, test, or extract files from a ZIP archive, commonly found on MS-DOS systems. The default behavior (with
no options) is to extract into the current directory (and subdirectories below it) all files from the specified ZIP ar‐
chive. A companion program, zip(1), creates ZIP archives; both programs are compatible with archives created by PKWARE’s
PKZIP and PKUNZIP for MS-DOS, but in many cases the program options or default behaviors differ.
ARGUMENTS
file[.zip]
Path of the ZIP archive(s). If the file specification is a wildcard, each matching file is processed in an order
determined by the operating system (or file system). Only the filename can be a wildcard; the path itself cannot.
Wildcard expressions are similar to those supported in commonly used Unix shells (sh, ksh, csh) and may contain:
* matches a sequence of 0 or more characters
? matches exactly 1 character
[...] matches any single character found inside the brackets; ranges are specified by a beginning character, a
hyphen, and an ending character. If an exclamation point or a caret (`!' or `^') follows the left bracket,
then the range of characters within the brackets is complemented (that is, anything except the characters
inside the brackets is considered a match). To specify a verbatim left bracket, the three-character se‐
quence ``[[]'' has to be used.
(Be sure to quote any character that might otherwise be interpreted or modified by the operating system, particu‐
larly under Unix and VMS.) If no matches are found, the specification is assumed to be a literal filename; and if
that also fails, the suffix .zip is appended. Note that self-extracting ZIP files are supported, as with any
other ZIP archive; just specify the .exe suffix (if any) explicitly.
[file(s)]
An optional list of archive members to be processed, separated by spaces. (VMS versions compiled with VMSCLI de‐
fined must delimit files with commas instead. See -v in OPTIONS below.) Regular expressions (wildcards) may be
used to match multiple members; see above. Again, be sure to quote expressions that would otherwise be expanded
or modified by the operating system.
[-x xfile(s)]
An optional list of archive members to be excluded from processing. Since wildcard characters normally match
(`/') directory separators (for exceptions see the option -W), this option may be used to exclude any files that
are in subdirectories. For example, ``unzip foo *.[ch] -x */*'' would extract all C source files in the main di‐
rectory, but none in any subdirectories. Without the -x option, all C source files in all directories within the
zipfile would be extracted.
[-d exdir]
An optional directory to which to extract files. By default, all files and subdirectories are recreated in the
current directory; the -d option allows extraction in an arbitrary directory (always assuming one has permission
to write to the directory). This option need not appear at the end of the command line; it is also accepted be‐
fore the zipfile specification (with the normal options), immediately after the zipfile specification, or between
the file(s) and the -x option. The option and directory may be concatenated without any white space between them,
but note that this may cause normal shell behavior to be suppressed. In particular, ``-d ~'' (tilde) is expanded
by Unix C shells into the name of the user's home directory, but ``-d~'' is treated as a literal subdirectory
``~'' of the current directory.
OPTIONS
Note that, in order to support obsolescent hardware, unzip’s usage screen is limited to 22 or 23 lines and should there‐
fore be considered only a reminder of the basic unzip syntax rather than an exhaustive list of all possible flags. The
exhaustive list follows:
-Z zipinfo(1) mode. If the first option on the command line is -Z, the remaining options are taken to be zipinfo(1)
options. See the appropriate manual page for a description of these options.
-A [OS/2, Unix DLL] print extended help for the DLL's programming interface (API).
-c extract files to stdout/screen (``CRT''). This option is similar to the -p option except that the name of each
file is printed as it is extracted, the -a option is allowed, and ASCII-EBCDIC conversion is automatically per‐
formed if appropriate. This option is not listed in the unzip usage screen.
-f freshen existing files, i.e., extract only those files that already exist on disk and that are newer than the disk
copies. By default unzip queries before overwriting, but the -o option may be used to suppress the queries. Note
that under many operating systems, the TZ (timezone) environment variable must be set correctly in order for -f
and -u to work properly (under Unix the variable is usually set automatically). The reasons for this are somewhat
subtle but have to do with the differences between DOS-format file times (always local time) and Unix-format times
(always in GMT/UTC) and the necessity to compare the two. A typical TZ value is ``PST8PDT'' (US Pacific time with
automatic adjustment for Daylight Savings Time or ``summer time'').
-l list archive files (short format). The names, uncompressed file sizes and modification dates and times of the
specified files are printed, along with totals for all files specified. If UnZip was compiled with OS2_EAS de‐
fined, the -l option also lists columns for the sizes of stored OS/2 extended attributes (EAs) and OS/2 access
control lists (ACLs). In addition, the zipfile comment and individual file comments (if any) are displayed. If a
file was archived from a single-case file system (for example, the old MS-DOS FAT file system) and the -L option
was given, the filename is converted to lowercase and is prefixed with a caret (^).
-p extract files to pipe (stdout). Nothing but the file data is sent to stdout, and the files are always extracted
in binary format, just as they are stored (no conversions).
-t test archive files. This option extracts each specified file in memory and compares the CRC (cyclic redundancy
check, an enhanced checksum) of the expanded file with the original file's stored CRC value.
-T [most OSes] set the timestamp on the archive(s) to that of the newest file in each one. This corresponds to zip's
-go option except that it can be used on wildcard zipfiles (e.g., ``unzip -T \*.zip'') and is much faster.
-u update existing files and create new ones if needed. This option performs the same function as the -f option, ex‐
tracting (with query) files that are newer than those with the same name on disk, and in addition it extracts
those files that do not already exist on disk. See -f above for information on setting the timezone properly.
-v list archive files (verbose format) or show diagnostic version info. This option has evolved and now behaves as
both an option and a modifier. As an option it has two purposes: when a zipfile is specified with no other op‐
tions, -v lists archive files verbosely, adding to the basic -l info the compression method, compressed size, com‐
pression ratio and 32-bit CRC. In contrast to most of the competing utilities, unzip removes the 12 additional
header bytes of encrypted entries from the compressed size numbers. Therefore, compressed size and compression
ratio figures are independent of the entry's encryption status and show the correct compression performance. (The
complete size of the encrypted compressed data stream for zipfile entries is reported by the more verbose zip‐
info(1) reports, see the separate manual.) When no zipfile is specified (that is, the complete command is simply
``unzip -v''), a diagnostic screen is printed. In addition to the normal header with release date and version,
unzip lists the home Info-ZIP ftp site and where to find a list of other ftp and non-ftp sites; the target operat‐
ing system for which it was compiled, as well as (possibly) the hardware on which it was compiled, the compiler
and version used, and the compilation date; any special compilation options that might affect the program's opera‐
tion (see also DECRYPTION below); and any options stored in environment variables that might do the same (see EN‐
VIRONMENT OPTIONS below). As a modifier it works in conjunction with other options (e.g., -t) to produce more
verbose or debugging output; this is not yet fully implemented but will be in future releases.
-z display only the archive comment.
MODIFIERS
-a convert text files. Ordinarily all files are extracted exactly as they are stored (as ``binary'' files). The -a
option causes files identified by zip as text files (those with the `t' label in zipinfo listings, rather than
`b') to be automatically extracted as such, converting line endings, end-of-file characters and the character
set itself as necessary. (For example, Unix files use line feeds (LFs) for end-of-line (EOL) and have no
end-of-file (EOF) marker; Macintoshes use carriage returns (CRs) for EOLs; and most PC operating systems use
CR+LF for EOLs and control-Z for EOF. In addition, IBM mainframes and the Michigan Terminal System use EBCDIC
rather than the more common ASCII character set, and NT supports Unicode.) Note that zip's identification of
text files is by no means perfect; some ``text'' files may actually be binary and vice versa. unzip therefore
prints ``[text]'' or ``[binary]'' as a visual check for each file it extracts when using the -a option. The
-aa option forces all files to be extracted as text, regardless of the supposed file type. On VMS, see also -S.
-b [general] treat all files as binary (no text conversions). This is a shortcut for ---a.
-b [Tandem] force the creation files with filecode type 180 ('C') when extracting Zip entries marked as "text". (On
Tandem, -a is enabled by default, see above).
-b [VMS] auto-convert binary files (see -a above) to fixed-length, 512-byte record format. Doubling the option (-bb)
forces all files to be extracted in this format. When extracting to standard output (-c or -p option in effect),
the default conversion of text record delimiters is disabled for binary (-b) resp. all (-bb) files.
-B [when compiled with UNIXBACKUP defined] save a backup copy of each overwritten file. The backup file gets the
name of the target file with a tilde and optionally a unique sequence number (up to 5 digits) appended. The se‐
quence number is applied whenever another file with the original name plus tilde already exists. When used to‐
gether with the "overwrite all" option -o, numbered backup files are never created. In this case, all backup files
are named as the original file with an appended tilde, existing backup files are deleted without notice. This
feature works similarly to the default behavior of emacs(1) in many locations.
Example: the old copy of ``foo'' is renamed to ``foo~''.
Warning: Users should be aware that the -B option does not prevent loss of existing data under all circumstances.
For example, when unzip is run in overwrite-all mode, an existing ``foo~'' file is deleted before unzip attempts
to rename ``foo'' to ``foo~''. When this rename attempt fails (because of a file locks, insufficient privileges,
or ...), the extraction of ``foo~'' gets cancelled, but the old backup file is already lost. A similar scenario
takes place when the sequence number range for numbered backup files gets exhausted (99999, or 65535 for 16-bit
systems). In this case, the backup file with the maximum sequence number is deleted and replaced by the new
backup version without notice.
-C use case-insensitive matching for the selection of archive entries from the command-line list of extract selection
patterns. unzip's philosophy is ``you get what you ask for'' (this is also responsible for the -L/-U change; see
the relevant options below). Because some file systems are fully case-sensitive (notably those under the Unix op‐
erating system) and because both ZIP archives and unzip itself are portable across platforms, unzip's default be‐
havior is to match both wildcard and literal filenames case-sensitively. That is, specifying ``makefile'' on the
command line will only match ``makefile'' in the archive, not ``Makefile'' or ``MAKEFILE'' (and similarly for
wildcard specifications). Since this does not correspond to the behavior of many other operating/file systems
(for example, OS/2 HPFS, which preserves mixed case but is not sensitive to it), the -C option may be used to
force all filename matches to be case-insensitive. In the example above, all three files would then match ``make‐
file'' (or ``make*'', or similar). The -C option affects file specs in both the normal file list and the ex‐
cluded-file list (xlist).
Please note that the -C option does neither affect the search for the zipfile(s) nor the matching of archive en‐
tries to existing files on the extraction path. On a case-sensitive file system, unzip will never try to over‐
write a file ``FOO'' when extracting an entry ``foo''!
-D skip restoration of timestamps for extracted items. Normally, unzip tries to restore all meta-information for ex‐
tracted items that are supplied in the Zip archive (and do not require privileges or impose a security risk). By
specifying -D, unzip is told to suppress restoration of timestamps for directories explicitly created from Zip ar‐
chive entries. This option only applies to ports that support setting timestamps for directories (currently
ATheOS, BeOS, MacOS, OS/2, Unix, VMS, Win32, for other unzip ports, -D has no effect). The duplicated option -DD
forces suppression of timestamp restoration for all extracted entries (files and directories). This option re‐
sults in setting the timestamps for all extracted entries to the current time.
On VMS, the default setting for this option is -D for consistency with the behaviour of BACKUP: file timestamps
are restored, timestamps of extracted directories are left at the current time. To enable restoration of direc‐
tory timestamps, the negated option --D should be specified. On VMS, the option -D disables timestamp restoration
for all extracted Zip archive items. (Here, a single -D on the command line combines with the default -D to do
what an explicit -DD does on other systems.)
-E [MacOS only] display contents of MacOS extra field during restore operation.
-F [Acorn only] suppress removal of NFS filetype extension from stored filenames.
-F [non-Acorn systems supporting long filenames with embedded commas, and only if compiled with ACORN_FTYPE_NFS de‐
fined] translate filetype information from ACORN RISC OS extra field blocks into a NFS filetype extension and ap‐
pend it to the names of the extracted files. (When the stored filename appears to already have an appended NFS
filetype extension, it is replaced by the info from the extra field.)
-i [MacOS only] ignore filenames stored in MacOS extra fields. Instead, the most compatible filename stored in the
generic part of the entry's header is used.
-j junk paths. The archive's directory structure is not recreated; all files are deposited in the extraction direc‐
tory (by default, the current one).
-J [BeOS only] junk file attributes. The file's BeOS file attributes are not restored, just the file's data.
-J [MacOS only] ignore MacOS extra fields. All Macintosh specific info is skipped. Data-fork and resource-fork are
restored as separate files.
-K [AtheOS, BeOS, Unix only] retain SUID/SGID/Tacky file attributes. Without this flag, these attribute bits are
cleared for security reasons.
-L convert to lowercase any filename originating on an uppercase-only operating system or file system. (This was un‐
zip's default behavior in releases prior to 5.11; the new default behavior is identical to the old behavior with
the -U option, which is now obsolete and will be removed in a future release.) Depending on the archiver, files
archived under single-case file systems (VMS, old MS-DOS FAT, etc.) may be stored as all-uppercase names; this can
be ugly or inconvenient when extracting to a case-preserving file system such as OS/2 HPFS or a case-sensitive one
such as under Unix. By default unzip lists and extracts such filenames exactly as they're stored (excepting trun‐
cation, conversion of unsupported characters, etc.); this option causes the names of all files from certain sys‐
tems to be converted to lowercase. The -LL option forces conversion of every filename to lowercase, regardless of
the originating file system.
-M pipe all output through an internal pager similar to the Unix more(1) command. At the end of a screenful of out‐
put, unzip pauses with a ``--More--'' prompt; the next screenful may be viewed by pressing the Enter (Return) key
or the space bar. unzip can be terminated by pressing the ``q'' key and, on some systems, the Enter/Return key.
Unlike Unix more(1), there is no forward-searching or editing capability. Also, unzip doesn't notice if long
lines wrap at the edge of the screen, effectively resulting in the printing of two or more lines and the likeli‐
hood that some text will scroll off the top of the screen before being viewed. On some systems the number of
available lines on the screen is not detected, in which case unzip assumes the height is 24 lines.
-n never overwrite existing files. If a file already exists, skip the extraction of that file without prompting. By
default unzip queries before extracting any file that already exists; the user may choose to overwrite only the
current file, overwrite all files, skip extraction of the current file, skip extraction of all existing files, or
rename the current file.
-N [Amiga] extract file comments as Amiga filenotes. File comments are created with the -c option of zip(1), or with
the -N option of the Amiga port of zip(1), which stores filenotes as comments.
-o overwrite existing files without prompting. This is a dangerous option, so use it with care. (It is often used
with -f, however, and is the only way to overwrite directory EAs under OS/2.)
-P password
use password to decrypt encrypted zipfile entries (if any). THIS IS INSECURE! Many multi-user operating systems
provide ways for any user to see the current command line of any other user; even on stand-alone systems there is
always the threat of over-the-shoulder peeking. Storing the plaintext password as part of a command line in an
automated script is even worse. Whenever possible, use the non-echoing, interactive prompt to enter passwords.
(And where security is truly important, use strong encryption such as Pretty Good Privacy instead of the rela‐
tively weak encryption provided by standard zipfile utilities.)
-q perform operations quietly (-qq = even quieter). Ordinarily unzip prints the names of the files it's extracting
or testing, the extraction methods, any file or zipfile comments that may be stored in the archive, and possibly a
summary when finished with each archive. The -q[q] options suppress the printing of some or all of these mes‐
sages.
-s [OS/2, NT, MS-DOS] convert spaces in filenames to underscores. Since all PC operating systems allow spaces in
filenames, unzip by default extracts filenames with spaces intact (e.g., ``EA DATA. SF''). This can be awkward,
however, since MS-DOS in particular does not gracefully support spaces in filenames. Conversion of spaces to un‐
derscores can eliminate the awkwardness in some cases.
-S [VMS] convert text files (-a, -aa) into Stream_LF record format, instead of the text-file default, variable-length
record format. (Stream_LF is the default record format of VMS unzip. It is applied unless conversion (-a, -aa
and/or -b, -bb) is requested or a VMS-specific entry is processed.)
-U [UNICODE_SUPPORT only] modify or disable UTF-8 handling. When UNICODE_SUPPORT is available, the option -U forces
unzip to escape all non-ASCII characters from UTF-8 coded filenames as ``#Uxxxx'' (for UCS-2 characters, or
``#Lxxxxxx'' for unicode codepoints needing 3 octets). This option is mainly provided for debugging purpose when
the fairly new UTF-8 support is suspected to mangle up extracted filenames.
The option -UU allows to entirely disable the recognition of UTF-8 encoded filenames. The handling of filename
codings within unzip falls back to the behaviour of previous versions.
[old, obsolete usage] leave filenames uppercase if created under MS-DOS, VMS, etc. See -L above.
-V retain (VMS) file version numbers. VMS files can be stored with a version number, in the format file.ext;##. By
default the ``;##'' version numbers are stripped, but this option allows them to be retained. (On file systems
that limit filenames to particularly short lengths, the version numbers may be truncated or stripped regardless of
this option.)
-W [only when WILD_STOP_AT_DIR compile-time option enabled] modifies the pattern matching routine so that both `?'
(single-char wildcard) and `*' (multi-char wildcard) do not match the directory separator character `/'. (The
two-character sequence ``**'' acts as a multi-char wildcard that includes the directory separator in its matched
characters.) Examples:
"*.c" matches "foo.c" but not "mydir/foo.c"
"**.c" matches both "foo.c" and "mydir/foo.c"
"*/*.c" matches "bar/foo.c" but not "baz/bar/foo.c"
"??*/*" matches "ab/foo" and "abc/foo"
but not "a/foo" or "a/b/foo"
This modified behaviour is equivalent to the pattern matching style used by the shells of some of UnZip's sup‐
ported target OSs (one example is Acorn RISC OS). This option may not be available on systems where the Zip ar‐
chive's internal directory separator character `/' is allowed as regular character in native operating system
filenames. (Currently, UnZip uses the same pattern matching rules for both wildcard zipfile specifications and
zip entry selection patterns in most ports. For systems allowing `/' as regular filename character, the -W option
would not work as expected on a wildcard zipfile specification.)
-X [VMS, Unix, OS/2, NT, Tandem] restore owner/protection info (UICs and ACL entries) under VMS, or user and group
info (UID/GID) under Unix, or access control lists (ACLs) under certain network-enabled versions of OS/2 (Warp
Server with IBM LAN Server/Requester 3.0 to 5.0; Warp Connect with IBM Peer 1.0), or security ACLs under Windows
NT. In most cases this will require special system privileges, and doubling the option (-XX) under NT instructs
unzip to use privileges for extraction; but under Unix, for example, a user who belongs to several groups can re‐
store files owned by any of those groups, as long as the user IDs match his or her own. Note that ordinary file
attributes are always restored--this option applies only to optional, extra ownership info available on some oper‐
ating systems. [NT's access control lists do not appear to be especially compatible with OS/2's, so no attempt is
made at cross-platform portability of access privileges. It is not clear under what conditions this would ever be
useful anyway.]
-Y [VMS] treat archived file name endings of ``.nnn'' (where ``nnn'' is a decimal number) as if they were VMS ver‐
sion numbers (``;nnn''). (The default is to treat them as file types.) Example:
"a.b.3" -> "a.b;3".
-$ [MS-DOS, OS/2, NT] restore the volume label if the extraction medium is removable (e.g., a diskette). Doubling
the option (-$$) allows fixed media (hard disks) to be labelled as well. By default, volume labels are ignored.
-/ extensions
[Acorn only] overrides the extension list supplied by Unzip$Ext environment variable. During extraction, filename
extensions that match one of the items in this extension list are swapped in front of the base name of the ex‐
tracted file.
-: [all but Acorn, VM/CMS, MVS, Tandem] allows to extract archive members into locations outside of the current ``
extraction root folder''. For security reasons, unzip normally removes ``parent dir'' path components (``../'')
from the names of extracted file. This safety feature (new for version 5.50) prevents unzip from accidentally
writing files to ``sensitive'' areas outside the active extraction folder tree head. The -: option lets unzip
switch back to its previous, more liberal behaviour, to allow exact extraction of (older) archives that used
``../'' components to create multiple directory trees at the level of the current extraction folder. This option
does not enable writing explicitly to the root directory (``/''). To achieve this, it is necessary to set the ex‐
traction target folder to root (e.g. -d / ). However, when the -: option is specified, it is still possible to
implicitly write to the root directory by specifying enough ``../'' path components within the zip archive. Use
this option with extreme caution.
-^ [Unix only] allow control characters in names of extracted ZIP archive entries. On Unix, a file name may contain
any (8-bit) character code with the two exception '/' (directory delimiter) and NUL (0x00, the C string termina‐
tion indicator), unless the specific file system has more restrictive conventions. Generally, this allows to em‐
bed ASCII control characters (or even sophisticated control sequences) in file names, at least on 'native' Unix
file systems. However, it may be highly suspicious to make use of this Unix "feature". Embedded control charac‐
ters in file names might have nasty side effects when displayed on screen by some listing code without sufficient
filtering. And, for ordinary users, it may be difficult to handle such file names (e.g. when trying to specify it
for open, copy, move, or delete operations). Therefore, unzip applies a filter by default that removes poten‐
tially dangerous control characters from the extracted file names. The -^ option allows to override this filter in
the rare case that embedded filename control characters are to be intentionally restored.
-2 [VMS] force unconditionally conversion of file names to ODS2-compatible names. The default is to exploit the des‐
tination file system, preserving case and extended file name characters on an ODS5 destination file system; and
applying the ODS2-compatibility file name filtering on an ODS2 destination file system.
ENVIRONMENT OPTIONS
unzip’s default behavior may be modified via options placed in an environment variable. This can be done with any op‐
tion, but it is probably most useful with the -a, -L, -C, -q, -o, or -n modifiers: make unzip auto-convert text files by
default, make it convert filenames from uppercase systems to lowercase, make it match names case-insensitively, make it
quieter, or make it always overwrite or never overwrite files as it extracts them. For example, to make unzip act as
quietly as possible, only reporting errors, one would use one of the following commands:
Unix Bourne shell:
UNZIP=-qq; export UNZIP
Unix C shell:
setenv UNZIP -qq
OS/2 or MS-DOS:
set UNZIP=-qq
VMS (quotes for lowercase):
define UNZIP_OPTS "-qq"
Environment options are, in effect, considered to be just like any other command-line options, except that they are ef‐
fectively the first options on the command line. To override an environment option, one may use the ``minus operator''
to remove it. For instance, to override one of the quiet-flags in the example above, use the command
unzip --q[other options] zipfile
The first hyphen is the normal switch character, and the second is a minus sign, acting on the q option. Thus the effect
here is to cancel one quantum of quietness. To cancel both quiet flags, two (or more) minuses may be used:
unzip -t--q zipfile
unzip ---qt zipfile
(the two are equivalent). This may seem awkward or confusing, but it is reasonably intuitive: just ignore the first hy‐
phen and go from there. It is also consistent with the behavior of Unix nice(1).
As suggested by the examples above, the default variable names are UNZIP_OPTS for VMS (where the symbol used to install
unzip as a foreign command would otherwise be confused with the environment variable), and UNZIP for all other operating
systems. For compatibility with zip(1), UNZIPOPT is also accepted (don't ask). If both UNZIP and UNZIPOPT are defined,
however, UNZIP takes precedence. unzip's diagnostic option (-v with no zipfile name) can be used to check the values of
all four possible unzip and zipinfo environment variables.
The timezone variable (TZ) should be set according to the local timezone in order for the -f and -u to operate correctly.
See the description of -f above for details. This variable may also be necessary to get timestamps of extracted files to
be set correctly. The WIN32 (Win9x/ME/NT4/2K/XP/2K3) port of unzip gets the timezone configuration from the registry,
assuming it is correctly set in the Control Panel. The TZ variable is ignored for this port.
DECRYPTION
Encrypted archives are fully supported by Info-ZIP software, but due to United States export restrictions, de-/encryption
support might be disabled in your compiled binary. However, since spring 2000, US export restrictions have been liber‐
ated, and our source archives do now include full crypt code. In case you need binary distributions with crypt support
enabled, see the file ``WHERE'' in any Info-ZIP source or binary distribution for locations both inside and outside the
US.
Some compiled versions of unzip may not support decryption. To check a version for crypt support, either attempt to test
or extract an encrypted archive, or else check unzip's diagnostic screen (see the -v option above) for ``[decryption]''
as one of the special compilation options.
As noted above, the -P option may be used to supply a password on the command line, but at a cost in security. The pre‐
ferred decryption method is simply to extract normally; if a zipfile member is encrypted, unzip will prompt for the pass‐
word without echoing what is typed. unzip continues to use the same password as long as it appears to be valid, by test‐
ing a 12-byte header on each file. The correct password will always check out against the header, but there is a
1-in-256 chance that an incorrect password will as well. (This is a security feature of the PKWARE zipfile format; it
helps prevent brute-force attacks that might otherwise gain a large speed advantage by testing only the header.) In the
case that an incorrect password is given but it passes the header test anyway, either an incorrect CRC will be generated
for the extracted data or else unzip will fail during the extraction because the ``decrypted'' bytes do not constitute a
valid compressed data stream.
If the first password fails the header check on some file, unzip will prompt for another password, and so on until all
files are extracted. If a password is not known, entering a null password (that is, just a carriage return or ``Enter'')
is taken as a signal to skip all further prompting. Only unencrypted files in the archive(s) will thereafter be ex‐
tracted. (In fact, that's not quite true; older versions of zip(1) and zipcloak(1) allowed null passwords, so unzip
checks each encrypted file to see if the null password works. This may result in ``false positives'' and extraction er‐
rors, as noted above.)
Archives encrypted with 8-bit passwords (for example, passwords with accented European characters) may not be portable
across systems and/or other archivers. This problem stems from the use of multiple encoding methods for such characters,
including Latin-1 (ISO 8859-1) and OEM code page 850. DOS PKZIP 2.04g uses the OEM code page; Windows PKZIP 2.50 uses
Latin-1 (and is therefore incompatible with DOS PKZIP); Info-ZIP uses the OEM code page on DOS, OS/2 and Win3.x ports but
ISO coding (Latin-1 etc.) everywhere else; and Nico Mak's WinZip 6.x does not allow 8-bit passwords at all. UnZip 5.3
(or newer) attempts to use the default character set first (e.g., Latin-1), followed by the alternate one (e.g., OEM code
page) to test passwords. On EBCDIC systems, if both of these fail, EBCDIC encoding will be tested as a last resort.
(EBCDIC is not tested on non-EBCDIC systems, because there are no known archivers that encrypt using EBCDIC encoding.)
ISO character encodings other than Latin-1 are not supported. The new addition of (partially) Unicode (resp. UTF-8)
support in UnZip 6.0 has not yet been adapted to the encryption password handling in unzip. On systems that use UTF-8 as
native character encoding, unzip simply tries decryption with the native UTF-8 encoded password; the built-in attempts to
check the password in translated encoding have not yet been adapted for UTF-8 support and will consequently fail.
EXAMPLES
To use unzip to extract all members of the archive letters.zip into the current directory and subdirectories below it,
creating any subdirectories as necessary:
unzip letters
To extract all members of letters.zip into the current directory only:
unzip -j letters
To test letters.zip, printing only a summary message indicating whether the archive is OK or not:
unzip -tq letters
To test all zipfiles in the current directory, printing only the summaries:
unzip -tq \*.zip
(The backslash before the asterisk is only required if the shell expands wildcards, as in Unix; double quotes could have
been used instead, as in the source examples below.) To extract to standard output all members of letters.zip whose
names end in .tex, auto-converting to the local end-of-line convention and piping the output into more(1):
unzip -ca letters \*.tex | more
To extract the binary file paper1.dvi to standard output and pipe it to a printing program:
unzip -p articles paper1.dvi | dvips
To extract all FORTRAN and C source files--*.f, *.c, *.h, and Makefile--into the /tmp directory:
unzip source.zip "*.[fch]" Makefile -d /tmp
(the double quotes are necessary only in Unix and only if globbing is turned on). To extract all FORTRAN and C source
files, regardless of case (e.g., both *.c and *.C, and any makefile, Makefile, MAKEFILE or similar):
unzip -C source.zip "*.[fch]" makefile -d /tmp
To extract any such files but convert any uppercase MS-DOS or VMS names to lowercase and convert the line-endings of all
of the files to the local standard (without respect to any files that might be marked ``binary''):
unzip -aaCL source.zip "*.[fch]" makefile -d /tmp
To extract only newer versions of the files already in the current directory, without querying (NOTE: be careful of un‐
zipping in one timezone a zipfile created in another--ZIP archives other than those created by Zip 2.1 or later contain
no timezone information, and a ``newer'' file from an eastern timezone may, in fact, be older):
unzip -fo sources
To extract newer versions of the files already in the current directory and to create any files not already there (same
caveat as previous example):
unzip -uo sources
To display a diagnostic screen showing which unzip and zipinfo options are stored in environment variables, whether de‐
cryption support was compiled in, the compiler with which unzip was compiled, etc.:
unzip -v
In the last five examples, assume that UNZIP or UNZIP_OPTS is set to -q. To do a singly quiet listing:
unzip -l file.zip
To do a doubly quiet listing:
unzip -ql file.zip
(Note that the ``.zip'' is generally not necessary.) To do a standard listing:
unzip --ql file.zip
or
unzip -l-q file.zip
or
unzip -l--q file.zip
(Extra minuses in options don't hurt.)
TIPS
The current maintainer, being a lazy sort, finds it very useful to define a pair of aliases: ``tt'' for ``unzip -tq''
and ``ii'' for ``unzip -Z'' (or ``zipinfo''). One may then simply type ``tt zipfile'' to test an archive, something
that is worth making a habit of doing. With luck unzip will report ``No errors detected in compressed data of
zipfile.zip,'' after which one may breathe a sigh of relief.
The maintainer also finds it useful to set the UNZIP environment variable to ``-aL'' and is tempted to add ``-C'' as
well. His ZIPINFO variable is set to ``-z''.
DIAGNOSTICS
The exit status (or error level) approximates the exit codes defined by PKWARE and takes on the following values, except
under VMS:
0 normal; no errors or warnings detected.
1 one or more warning errors were encountered, but processing completed successfully anyway. This includes
zipfiles where one or more files was skipped due to unsupported compression method or encryption with an
unknown password.
2 a generic error in the zipfile format was detected. Processing may have completed successfully anyway;
some broken zipfiles created by other archivers have simple work-arounds.
3 a severe error in the zipfile format was detected. Processing probably failed immediately.
4 unzip was unable to allocate memory for one or more buffers during program initialization.
5 unzip was unable to allocate memory or unable to obtain a tty to read the decryption password(s).
6 unzip was unable to allocate memory during decompression to disk.
7 unzip was unable to allocate memory during in-memory decompression.
8 [currently not used]
9 the specified zipfiles were not found.
10 invalid options were specified on the command line.
11 no matching files were found.
50 the disk is (or was) full during extraction.
51 the end of the ZIP archive was encountered prematurely.
80 the user aborted unzip prematurely with control-C (or similar)
81 testing or extraction of one or more files failed due to unsupported compression methods or unsupported de‐
cryption.
82 no files were found due to bad decryption password(s). (If even one file is successfully processed, how‐
ever, the exit status is 1.)
VMS interprets standard Unix (or PC) return values as other, scarier-looking things, so unzip instead maps them into VMS-
style status codes. The current mapping is as follows: 1 (success) for normal exit, 0x7fff0001 for warning errors, and
(0x7fff000? + 16*normal_unzip_exit_status) for all other errors, where the `?' is 2 (error) for unzip values 2, 9-11 and
80-82, and 4 (fatal error) for the remaining ones (3-8, 50, 51). In addition, there is a compilation option to expand
upon this behavior: defining RETURN_CODES results in a human-readable explanation of what the error status means.
BUGS
Multi-part archives are not yet supported, except in conjunction with zip. (All parts must be concatenated together in
order, and then ``zip -F'' (for zip 2.x) or ``zip -FF'' (for zip 3.x) must be performed on the concatenated archive in
order to ``fix'' it. Also, zip 3.0 and later can combine multi-part (split) archives into a combined single-file
archive using ``zip -s- inarchive -O outarchive''. See the zip 3 manual page for more information.) This will
definitely be corrected in the next major release.
Archives read from standard input are not yet supported, except with funzip (and then only the first member of the ar‐
chive can be extracted).
Archives encrypted with 8-bit passwords (e.g., passwords with accented European characters) may not be portable across
systems and/or other archivers. See the discussion in DECRYPTION above.
unzip's -M (``more'') option tries to take into account automatic wrapping of long lines. However, the code may fail to
detect the correct wrapping locations. First, TAB characters (and similar control sequences) are not taken into account,
they are handled as ordinary printable characters. Second, depending on the actual system / OS port, unzip may not de‐
tect the true screen geometry but rather rely on "commonly used" default dimensions. The correct handling of tabs would
require the implementation of a query for the actual tabulator setup on the output console.
Dates, times and permissions of stored directories are not restored except under Unix. (On Windows NT and successors,
timestamps are now restored.)
[MS-DOS] When extracting or testing files from an archive on a defective floppy diskette, if the ``Fail'' option is cho‐
sen from DOS's ``Abort, Retry, Fail?'' message, older versions of unzip may hang the system, requiring a reboot. This
problem appears to be fixed, but control-C (or control-Break) can still be used to terminate unzip.
Under DEC Ultrix, unzip would sometimes fail on long zipfiles (bad CRC, not always reproducible). This was apparently
due either to a hardware bug (cache memory) or an operating system bug (improper handling of page faults?). Since Ultrix
has been abandoned in favor of Digital Unix (OSF/1), this may not be an issue anymore.
[Unix] Unix special files such as FIFO buffers (named pipes), block devices and character devices are not restored even
if they are somehow represented in the zipfile, nor are hard-linked files relinked. Basically the only file types re‐
stored by unzip are regular files, directories and symbolic (soft) links.
[OS/2] Extended attributes for existing directories are only updated if the -o (``overwrite all'') option is given. This
is a limitation of the operating system; because directories only have a creation time associated with them, unzip has no
way to determine whether the stored attributes are newer or older than those on disk. In practice this may mean a two-
pass approach is required: first unpack the archive normally (with or without freshening/updating existing files), then
overwrite just the directory entries (e.g., ``unzip -o foo */'').
[VMS] When extracting to another directory, only the [.foo] syntax is accepted for the -d option; the simple Unix foo
syntax is silently ignored (as is the less common VMS foo.dir syntax).
[VMS] When the file being extracted already exists, unzip's query only allows skipping, overwriting or renaming; there
should additionally be a choice for creating a new version of the file. In fact, the ``overwrite'' choice does create a
new version; the old version is not overwritten or deleted.
SEE ALSO
funzip(1), zip(1), zipcloak(1), zipgrep(1), zipinfo(1), zipnote(1), zipsplit(1)
URL
The Info-ZIP home page is currently at
http://www.info-zip.org/pub/infozip/
or
ftp://ftp.info-zip.org/pub/infozip/ .
AUTHORS
The primary Info-ZIP authors (current semi-active members of the Zip-Bugs workgroup) are: Ed Gordon (Zip, general main‐
tenance, shared code, Zip64, Win32, Unix, Unicode); Christian Spieler (UnZip maintenance coordination, VMS, MS-DOS,
Win32, shared code, general Zip and UnZip integration and optimization); Onno van der Linden (Zip); Mike White (Win32,
Windows GUI, Windows DLLs); Kai Uwe Rommel (OS/2, Win32); Steven M. Schweda (VMS, Unix, support of new features); Paul
Kienitz (Amiga, Win32, Unicode); Chris Herborth (BeOS, QNX, Atari); Jonathan Hudson (SMS/QDOS); Sergio Monesi (Acorn RISC
OS); Harald Denker (Atari, MVS); John Bush (Solaris, Amiga); Hunter Goatley (VMS, Info-ZIP Site maintenance); Steve Sal‐
isbury (Win32); Steve Miller (Windows CE GUI), Johnny Lee (MS-DOS, Win32, Zip64); and Dave Smith (Tandem NSK).
The following people were former members of the Info-ZIP development group and provided major contributions to key parts
of the current code: Greg ``Cave Newt'' Roelofs (UnZip, unshrink decompression); Jean-loup Gailly (deflate compression);
Mark Adler (inflate decompression, fUnZip).
The author of the original unzip code upon which Info-ZIP's was based is Samuel H. Smith; Carl Mascott did the first Unix
port; and David P. Kirschbaum organized and led Info-ZIP in its early days with Keith Petersen hosting the original
mailing list at WSMR-SimTel20. The full list of contributors to UnZip has grown quite large; please refer to the CON‐
TRIBS file in the UnZip source distribution for a relatively complete version.
VERSIONS
v1.2 15 Mar 89 Samuel H. Smith
v2.0 9 Sep 89 Samuel H. Smith
v2.x fall 1989 many Usenet contributors
v3.0 1 May 90 Info-ZIP (DPK, consolidator)
v3.1 15 Aug 90 Info-ZIP (DPK, consolidator)
v4.0 1 Dec 90 Info-ZIP (GRR, maintainer)
v4.1 12 May 91 Info-ZIP
v4.2 20 Mar 92 Info-ZIP (Zip-Bugs subgroup, GRR)
v5.0 21 Aug 92 Info-ZIP (Zip-Bugs subgroup, GRR)
v5.01 15 Jan 93 Info-ZIP (Zip-Bugs subgroup, GRR)
v5.1 7 Feb 94 Info-ZIP (Zip-Bugs subgroup, GRR)
v5.11 2 Aug 94 Info-ZIP (Zip-Bugs subgroup, GRR)
v5.12 28 Aug 94 Info-ZIP (Zip-Bugs subgroup, GRR)
v5.2 30 Apr 96 Info-ZIP (Zip-Bugs subgroup, GRR)
v5.3 22 Apr 97 Info-ZIP (Zip-Bugs subgroup, GRR)
v5.31 31 May 97 Info-ZIP (Zip-Bugs subgroup, GRR)
v5.32 3 Nov 97 Info-ZIP (Zip-Bugs subgroup, GRR)
v5.4 28 Nov 98 Info-ZIP (Zip-Bugs subgroup, SPC)
v5.41 16 Apr 00 Info-ZIP (Zip-Bugs subgroup, SPC)
v5.42 14 Jan 01 Info-ZIP (Zip-Bugs subgroup, SPC)
v5.5 17 Feb 02 Info-ZIP (Zip-Bugs subgroup, SPC)
v5.51 22 May 04 Info-ZIP (Zip-Bugs subgroup, SPC)
v5.52 28 Feb 05 Info-ZIP (Zip-Bugs subgroup, SPC)
v6.0 20 Apr 09 Info-ZIP (Zip-Bugs subgroup, SPC)
Info-ZIP 20 April 2009 (v6.0) UNZIP(1)
… _linux-beginner-gzip:
:ref:`unzip <linux-beginner-unzip>`
gzip
decompresses files with the gz suffix:
$ gzip -d data.gz
This command decompresses data.gz
and produces a file named data
in the current directory.
Be especially careful: this operation deletes the source file (yes, it really does delete the source file).
So if you want to keep the original compressed file, remember to use the -k
option:
$ gzip -dk data.gz
This keeps the original data.gz
and produces the decompressed data
file alongside it.
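If your gzip predates -k, the same keep-the-original effect can be had by decompressing to standard output with -c. A minimal, self-contained sketch (the file name and contents are made up):

```bash
# Work in a scratch directory with a tiny example file
cd "$(mktemp -d)"
printf 'hello\n' > data

gzip data                  # creates data.gz and removes data
gzip -dk data.gz           # -k: decompress but keep data.gz
gzip -dc data.gz > copy    # -c: write to stdout; the source is never touched
```

After this, data, data.gz and copy all exist side by side.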
… note::
Let us start anew, recover our old rivers and mountains, and pay homage at the palace gate.
Yue Fei (Song dynasty), "Man Jiang Hong"
head
displays the first n lines of a file; if n is not given, the first 10 lines are shown.
Official definition:
head - output the first part of files
$ head [option] [filename]
The most commonly used options:
-c <count>
print the first <count> bytes
-n <lines>
print the first <lines> lines of the file
Assume the file text.txt contains 20 lines numbered 1 to 20. By default, head prints the first 10:
$ head text.txt
1
2
3
4
5
6
7
8
9
10
To display the first 5 lines of text.txt:
$ head -n 5 text.txt
To display the first 20 bytes of the file (lines 1 through 9 are 18 bytes, so the "10" on the next line is included as well):
$ head -c 20 text.txt
1
2
3
4
5
6
7
8
9
10
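A couple of variations worth knowing: with GNU coreutils, -n accepts a leading minus to print everything except the last K lines. A self-contained sketch (the file is generated on the spot):

```bash
cd "$(mktemp -d)"
seq 20 > text.txt          # 20 lines: 1..20

head -n 5 text.txt         # first 5 lines
head -n -15 text.txt       # GNU extension: all but the last 15 lines, i.e. also 1..5
head -c 20 text.txt        # first 20 bytes
```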
history
displays the commands the user has executed before, and can also append to or delete entries from that history.
If you use Linux commands frequently, history
can noticeably improve your efficiency.
The syntax is simple:
$ history [OPTIONS] [..]
Common options:
-a
append the history of the current shell session to the history file (the file where command history is saved)
-c
clear the current history list
-d
delete the entry at the given offset in the history list
-n
read from the history file any lines not yet read in this shell session
$ history
1 sudo apt get update
2 sudo apt update
3 sudo apt upgrade
4 sudo apt install vim
5 ls
6 pwd
7 cd
8 ls
9 sudo apt install vim
10 sudo apt search pgplot
11 bash go.sh
12 sudo apt install zsh
13 bash down.sh
14 exit
15 echo $PS1
16 bash
17 exit
18 sh test.sh
19 bash
20 exit
21 rsync -rv --progress user@192.168.1.123:~/data1/ .
22 rsync -rv --progress user@192.168.1.123:~/src/ .
......
history
followed by a number lists that many of the most recent commands:
$ history 3
8540 pwd
8541 echo $PATH
8542 git status
Individual entries can be deleted with the -d
option:
# delete history entry 35
$ history -d 35
# delete entries 31 through 39: each deletion renumbers the list, so delete offset 31 nine times
$ for i in {1..9}; do history -d 31; done
If you have logged in to some debugging machine that you will not use again, it can be wise, for security, to wipe the entire command history with the -c
(clear)
option.
$ history -c
After this, no history remains.
Finally, a classic trick: rank the commands you use most often.
$ history | awk 'BEGIN {FS="[ \t]+|\\|"} {print $3}' | sort | uniq -c | sort -nr
967 ls
507 cd
199 vim
199 python
165 cp
152 less
105 mv
95 rm
94 ll
90 echo
85 bash
72 cat
66 apt
59 pwd
51 mkdir
...
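The counting pipeline above can be tried safely against a fabricated history file. Note that when reading the file directly the command word is field 1, whereas in `history` output it comes after the entry number. A sketch with made-up sample data (the real file is usually ~/.bash_history):

```bash
# Fabricate a tiny history file
hist=$(mktemp)
printf '%s\n' 'ls -lh' 'cd /tmp' 'ls' 'git status' 'ls' > "$hist"

# Count the command word (field 1) and rank by frequency
awk '{print $1}' "$hist" | sort | uniq -c | sort -nr
```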
# The Linux hostname Command
Normally the hostname is set when the system is installed.
Most people probably don't give it much thought during installation. But once you manage or log in to a number of machines, choosing meaningful hostnames becomes something worth caring about:
at the very least you will know which system your `ssh` session has landed on.
The official definition is:
> hostname - show or set the system's host name
The `hostname` command shows and sets the system's host name. The environment variable **HOSTNAME** (or **HOST** in some shells) holds the current host name.
Usage:
```bash
$ hostname [-b|--boot] [-F|--file filename] [hostname]
```
Basic usage comes down to viewing and changing the name.
On a running system you can change the hostname on the spot with hostname
itself; the change stays in effect until the next reboot.
The command is:
$ hostname NEW_HOSTNAME
Note that the system does not save this new name permanently; after a reboot the old hostname is back.
Making the change permanent depends on which of the two main distribution families you are on; in both cases a file has to be edited, which requires administrator privileges.
Debian-based systems store the name in /etc/hostname, which is read at boot by the init script /etc/init.d/hostname.sh.
So on these systems the hostname is changed permanently by editing /etc/hostname.
/etc/init.d/hostname.sh start
Running this command after editing applies the change immediately.
RedHat-based systems use the file /etc/sysconfig/network
instead; edit that file, and use the hostname command to apply the name.
There are other methods as well. Do you know them?
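One more method worth knowing: on systemd-based distributions, hostnamectl is the standard way to change the name persistently. A read-only sketch (the privileged commands are shown as comments; NEW_HOSTNAME is a placeholder):

```bash
# Read the current hostname; uname -n prints the same node name as `hostname`
current=$(uname -n)
echo "current hostname: $current"

# Persistent change on systemd-based distributions (requires root):
#   sudo hostnamectl set-hostname NEW_HOSTNAME
# Temporary change, lost at reboot (requires root):
#   sudo hostname NEW_HOSTNAME
```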
hwinfo
Yet another command for displaying hardware information.
It provides detailed information about the various hardware components of a Linux system, such as the CPU, memory, graphics card and disks.
sudo hwinfo
lists detailed information about nearly all hardware available on the system.
sudo hwinfo --cpu
sudo hwinfo --memory
sudo hwinfo --gfxcard
sudo hwinfo --disk
Adding options such as --cpu
, --memory
, --gfxcard
or --disk
restricts the output to that particular kind of hardware.
id
displays a user's ID and the IDs of the groups the user belongs to.
Official definition:
id - print real and effective user and group IDs
$ id [OPTION]... [USER]
Option notes:
-g, --group
print only the effective group ID
-G, --groups
print all group IDs
-u, --user
print only the effective user ID
Show information about the current user:
$ id
uid=1000(user) gid=1000(user) groups=1000(user),980(data),1006(monitor)
The user ID and group ID of user are both 1000, and the user additionally belongs to the data and monitor groups.
Show only the group ID:
$ id -g
1000
Linux ifconfig
displays or configures network interfaces, and comes up frequently when debugging or tuning a system.
Official definition:
ifconfig - configure a network interface
For this command, knowing how to inspect interfaces and how to assign an IP address covers most needs; digging deeper into networking calls for a few more options.
Usage:
# display
$ ifconfig [-v] [-a] [-s] [interface]
# configure
$ ifconfig [-v] interface [aftype] options | address ...
The meaning of some of the options:
-a
show the status of all interfaces, even those that are down
-s
show a short listing
interface mtu N
set the maximum transmission unit (requires root)
netmask addr
set the netmask (requires root)
interface up
activate the interface (requires root)
interface down
shut the interface down (requires root)
interface hw ether xx:xx:xx:xx:xx:xx
set the MAC address (requires root)
With no arguments at all, ifconfig shows the currently active interfaces:
$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.123 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 xxxx::xxxx:xxxx:xxxx:xxxx prefixlen 64 scopeid 0x20<link>
inet6 xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx prefixlen 64 scopeid 0x0<global>
ether xx:xx:xx:xx:xx:xx txqueuelen 1000 (Ethernet)
RX packets 5634431 bytes 4994127142 (4.6 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 858051 bytes 109858013 (104.7 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0xc7320000-c733ffff
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.6.123 netmask 255.255.255.0 broadcast 192.168.6.255
inet6 xxxx::xxxx:xxxx:xxxx:xxxx prefixlen 64 scopeid 0x20<link>
ether xx:xx:xx:xx:xx:xx txqueuelen 1000 (Ethernet)
RX packets 1547215 bytes 92862867 (88.5 MiB)
RX errors 0 dropped 6 overruns 0 frame 0
TX packets 3230 bytes 922051 (900.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 219608 bytes 105943591 (101.0 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 219608 bytes 105943591 (101.0 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
With no arguments, ifconfig shows only the interfaces that are configured and up; ifconfig -a
shows all of them, including interfaces that are down.
Alternatively, give an interface name, such as eth1 above, to print only that interface:
$ ifconfig eth1
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.6.123 netmask 255.255.255.0 broadcast 192.168.6.255
inet6 xxxx::xxxx:xxxx:xxxx:xxxx prefixlen 64 scopeid 0x20<link>
ether xx:xx:xx:xx:xx:xx txqueuelen 1000 (Ethernet)
RX packets 1547215 bytes 92862867 (88.5 MiB)
RX errors 0 dropped 6 overruns 0 frame 0
TX packets 3230 bytes 922051 (900.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
If you only want the MTU and the packet counters, use the -s option:
$ ifconfig -s
Iface MTU RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0 1500 5665450 0 0 0 867639 0 0 0 BMRU
eth1 1500 3489187217 0 101054 0 501260400 0 0 0 BMU
lo 65536 219708 0 0 0 219708 0 0 0 LRU
The output contains the MTU plus the transmit and receive statistics.
The following configures an IP address, netmask and broadcast address on eth0; the steps can of course be performed one at a time.
# assign an IP address to eth0
$ ifconfig eth0 192.168.1.123
# assign an IP address and netmask to eth0
$ ifconfig eth0 192.168.1.123 netmask 255.255.255.0
# assign an IP address, netmask and broadcast address to eth0
$ ifconfig eth0 192.168.1.123 netmask 255.255.255.0 broadcast 192.168.1.255
In some situations the MTU needs changing, for example raising it to 9000:
$ ifconfig eth1
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.6.123 netmask 255.255.255.0 broadcast 192.168.6.255
inet6 xxxx::xxxx:xxxx:xxxx:xxxx prefixlen 64 scopeid 0x20<link>
ether xx:xx:xx:xx:xx:xx txqueuelen 1000 (Ethernet)
RX packets 1547215 bytes 92862867 (88.5 MiB)
RX errors 0 dropped 6 overruns 0 frame 0
TX packets 3230 bytes 922051 (900.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
# change the MTU
$ ifconfig eth1 mtu 9000
$ ifconfig eth1
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet 192.168.6.123 netmask 255.255.255.0 broadcast 192.168.6.255
inet6 xxxx::xxxx:xxxx:xxxx:xxxx prefixlen 64 scopeid 0x20<link>
ether xx:xx:xx:xx:xx:xx txqueuelen 1000 (Ethernet)
RX packets 1547215 bytes 92862867 (88.5 MiB)
RX errors 0 dropped 6 overruns 0 frame 0
TX packets 3230 bytes 922051 (900.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
The output shows that the mtu has been updated to 9000.
This value can have a significant impact on network throughput.
Bringing an interface down and up is mainly needed after reassigning an IP address, or to take a card out of service temporarily.
# shut eth0 down
$ ifconfig eth0 down
# bring eth0 up
$ ifconfig eth0 up
Note, however, that ifconfig is considered legacy on most modern distributions, and the ip
command is the recommended replacement.
The Linux ip
command is similar to ifconfig
but considerably more powerful; it is mainly used to show or configure network devices.
It has been part of the kernel since Linux 2.2, so ip
is the enhanced network-configuration tool intended to replace ifconfig
and extend its functionality.
Official definition:
ip - show / manipulate routing, devices, policy routing and tunnels
This command has a very large set of subcommands. Some basics follow; the rest is left for your own exploration.
Usage:
$ ip [ OPTIONS ] OBJECT { COMMAND | help }
$ ip [ -force ] -batch filename
# possible values of OBJECT
# OBJECT := { link | address | addrlabel | route | rule | neigh | ntable | tunnel | tuntap | maddress | mroute | mrule | monitor | xfrm | netns | l2tp | tcp_metrics | token | macsec }
# possible values of OPTIONS
# OPTIONS := { -V[ersion] | -h[uman-readable] | -s[tatistics] | -d[etails] | -r[esolve] | -iec | -f[amily] { inet | inet6 | ipx | dnet | link } | -4 | -6 | -I | -D | -B | -0 | -l[oops] { maximum-addr-flush-attempts } | -o[neline] | -rc[vbuf] [size] | -t[imestamp] | -ts[hort] | -n[etns] name | -a[ll] | -c[olor] }
The available COMMANDs depend mainly on the OBJECT and vary somewhat; add
, delete
and show
(or list
) are generally available, and help
can always be given to print the usage for that object.
The most commonly used OBJECTs:
link
network device
address
protocol (IP) address on a device
-s, -stats, -statistics
print statistics
# show network devices
$ ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
3: eno2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 9000 qdisc mq state DOWN mode DEFAULT group default qlen 1000
link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
# show more information, including IP addresses
$ ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
inet 192.168.1.123/24 brd 192.168.254.255 scope global noprefixroute eno1
valid_lft forever preferred_lft forever
inet6 xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx/64 scope global noprefixroute
valid_lft forever preferred_lft forever
3: eno2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 9000 qdisc mq state DOWN group default qlen 1000
link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
show is the default subcommand, so plain ip link
or ip address
gives the same result.
IP addresses are set and removed with ip addr add/del xxx.xxx.xxx.xxx dev interface
.
For example, to set or delete the IP address of eth0:
# assign an IP address
$ ip addr add 192.168.0.1/24 dev eth0
# delete the IP address
$ ip addr del 192.168.0.1/24 dev eth0
As with ifconfig, up and down bring an interface up or shut it down:
# bring the interface up
$ ip link set eth0 up
# shut the interface down
$ ip link set eth0 down
The -s option summarizes statistics in a readable form; here is a look at the network:
$ ip -s link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
RX: bytes packets errors dropped overrun mcast
871883256468 251700492 0 0 0 0
TX: bytes packets errors dropped carrier collsns
871883256468 251700492 0 0 0 0
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
RX: bytes packets errors dropped overrun mcast
64930085920632 50955323447 0 613156 0 472190933
TX: bytes packets errors dropped carrier collsns
17534345850354 17448077191 0 0 0 0
3: eno2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 9000 qdisc mq state DOWN mode DEFAULT group default qlen 1000
link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
RX: bytes packets errors dropped overrun mcast
0 0 0 0 0 0
TX: bytes packets errors dropped carrier collsns
0 0 0 0 0 0
The statistics are laid out in labelled columns, which makes them much easier to read.
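For quick inspection, the -brief (-br) flag condenses the output to one line per interface; none of the following commands change anything or require root. A short sketch:

```bash
# One line per interface: name, operational state, addresses
ip -br link show
ip -br addr show

# Kernel routing table (default gateway, per-network routes)
ip route show
```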
… note::
I urge you not to treasure the gold-threaded robe; I urge you to treasure the days of your youth.
Linux join
joins lines of two files that match on a given field.
It finds the lines in the two files whose specified field has the same content, merges them, and writes the result to standard output.
The official description is:
join - join lines of two files on a common field
The syntax is:
$ join [OPTION]... FILE1 FILE2
The command does have a number of options, but the defaults are usually enough.
The simplest case is joining two files directly.
First look at the contents of the two files, then run join
on them.
# inspect the contents of file1 and file2:
$ cat file1
Zhangsan age 14
Lisi age 15
Wangwu age 16
$ cat file2
Zhangsan score 80
Lisi score 90
Wangwu score 85
# run join
$ join file1 file2
Zhangsan age 14 score 80
Lisi age 15 score 90
Wangwu age 16 score 85
# swap the order of the two files
$ join file2 file1
Zhangsan score 80 age 14
Lisi score 90 age 15
Wangwu score 85 age 16
As you can see, swapping the argument order does affect the result: the merged fields appear in a different order.
And if a file's lines are not sorted on the join field, join prints warnings while it works, as shown below:
$ cat file1
Jialiu age 15
Zhangsan age 14
Lisi age 15
Wangwu age 16
$ cat file2
Zhangsan score 80
Lisi score 90
Wangwu score 85
Jialiu score 88
$ join file1 file2
join: file1:3: is not sorted: Lisi age 15
join: file2:2: is not sorted: Lisi score 90
Zhangsan age 14 score 80
Lisi age 15 score 90
Wangwu age 16 score 85
$ join file2 file1
join: file2:2: is not sorted: Lisi score 90
join: file1:3: is not sorted: Lisi age 15
Zhangsan score 80 age 14
Lisi score 90 age 15
Wangwu score 85 age 16
join [-i] [-a <1 or 2>] [-e <string>] [-o <format>] [-t <char>] [-v <1 or 2>] [-1 <field>] [-2 <field>] [--help] [--version] file1 file2
Options:
-a FILENUM
also print unpairable lines from file FILENUM, where FILENUM is 1 or 2, corresponding to FILE1 or FILE2
-e EMPTY
replace missing input fields with EMPTY
-i, --ignore-case
ignore differences in case when comparing fields
-j FIELD
equivalent to ‘-1 FIELD -2 FIELD’
-o FORMAT
obey FORMAT while constructing output line
-t CHAR
use CHAR as input and output field separator
-v FILENUM
like -a FILENUM, but suppress joined output lines
-1 FIELD
join on this FIELD of file 1
-2 FIELD
join on this FIELD of file 2
--check-order
check that the input is correctly sorted, even if all input lines are pairable
--nocheck-order
do not check that the input is correctly sorted
--header
treat the first line in each file as field headers, print them without trying to pair them
-z, --zero-terminated
line delimiter is NUL, not newline
Unless -t CHAR is given, leading blanks separate fields and are ignored, else fields are separated by CHAR. Any FIELD is a field
number counted from 1. FORMAT is one or more comma or blank separated specifications, each being 'FILENUM.FIELD' or '0'. Default
FORMAT outputs the join field, the remaining fields from FILE1, the remaining fields from FILE2, all separated by CHAR. If FORMAT
is the keyword 'auto', then the first line of each file determines the number of fields output for each line.
Important: FILE1 and FILE2 must be sorted on the join fields. E.g., use "sort -k 1b,1" if 'join' has no options, or use "join -t
''" if 'sort' has no options. Note, comparisons honor the rules specified by 'LC_COLLATE'. If the input is not sorted and some
lines cannot be joined, a warning message will be given.
SEE ALSO: comm(1), uniq(1)
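The sort requirement mentioned in the warnings above is worth building into the habit: sort both inputs on the join field first. A self-contained sketch (the names and scores are made up) that also keeps unmatched lines from the second file with -a 2:

```bash
cd "$(mktemp -d)"
printf '%s\n' 'Wangwu age 16' 'Lisi age 15'                     > ages
printf '%s\n' 'Lisi score 90' 'Wangwu score 85' 'Zhao score 70' > scores

# join requires input sorted on the join field, so sort first
sort -k1,1 ages   > ages.sorted
sort -k1,1 scores > scores.sorted

# -a 2 also prints lines from file 2 that have no partner in file 1
join -a 2 ages.sorted scores.sorted
```

Here "Zhao score 70" survives in the output even though ages has no matching line.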
… _linux-beginner-kill:
The Linux kill command terminates running processes or jobs.
Officially:
kill - send a signal to a process
kill sends the specified signal to the given process or job. The default signal is 15 (SIGTERM), which asks the process or job to terminate. If the process or job ignores this signal, signal 9 (SIGKILL) can be used to kill it forcibly. Process and job numbers can be obtained with the ps
and jobs
commands.
$ kill [option] <pid> [...]
Option notes:
Use kill -l
to list all available signals.
$ kill -l
HUP INT QUIT ILL TRAP ABRT BUS FPE KILL USR1 SEGV USR2 PIPE ALRM TERM STKFLT CHLD CONT STOP TSTP TTIN TTOU URG XCPU XFSZ VTALRM PROF WINCH POLL PWR SYS
The most commonly used are 1 (SIGHUP, hang up / reload), 2 (SIGINT, interrupt, i.e. Ctrl-C), 9 (SIGKILL, force kill) and 15 (SIGTERM, the default, terminate gracefully).
Kill a process:
$ kill 12345
Force-kill a process:
$ kill -KILL 123456
# or
$ kill -9 123456
How, then, do you kill all processes belonging to a given user, say user? Note that kill itself only accepts PIDs; matching processes by user is what pkill and killall are for:
$ pkill -9 -u user
# or
$ killall -u user
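The TERM-then-KILL escalation described above can be scripted; kill -0 probes whether a PID is still alive without sending a real signal. A self-contained sketch using a throwaway sleep process:

```bash
sleep 100 &                   # a disposable test process
pid=$!

kill "$pid"                   # send the default SIGTERM (15): ask politely
sleep 1                       # give it a moment to exit

if kill -0 "$pid" 2>/dev/null; then   # -0 probes without sending a signal
    kill -9 "$pid"            # SIGKILL (9): cannot be caught or ignored
fi
wait "$pid" 2>/dev/null || true   # reap the child; status reflects the signal
```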
… _linux-beginner-killall:
… note::
Exert yourself while there is still time, for the years and months wait for no one.
Tao Yuanming, "Miscellaneous Poems"
Linux offers many commands for managing and controlling processes.
One of the commonly used ones is killall
, which lets you terminate running processes by name.
Official definition:
killall - kill processes by name
The killall
command sends a signal to terminate the specified processes. Unlike kill
, killall
selects the processes to terminate by name rather than by process ID, which is very convenient for killing several same-named processes at once.
The superuser can kill any process.
The basic syntax of killall
is:
$ killall [options] process-name
The following options adjust killall's
behaviour:
-i
interactive mode: ask for confirmation before terminating each process
-e
require an exact match of the full process name rather than any substring
-s
specify the signal to send, e.g. -s HUP
-v
report verbosely on each process terminated
To terminate a single process by name, use:
$ killall process-name
For example:
$ killall firefox
This terminates every process named firefox
.
To match several related process names at once, the -r option interprets the name as an extended regular expression:
$ killall -r pattern
Example:
$ killall -r 'chrom.*'
This terminates every process whose name matches the pattern, e.g. both chrome
and chromium
.
The -i
option asks for confirmation before each process is terminated. Example:
$ killall -i firefox
The command lists each matching process in turn and waits for the user to confirm, which is handy when you are not sure every match should really be killed.
The -s
option selects the signal to send. Example:
$ killall -s HUP nginx
This sends the HUP
signal to every process named nginx
, causing it to reload its configuration.
killall
is a powerful process-management tool that terminates processes by name. It simplifies killing several same-named processes at once and offers useful options such as interactive mode and signal selection. For day-to-day system administration and troubleshooting, killall
is well worth knowing.
SYNOPSIS
killall [-delmsvqz] [-help] [-I] [-u user] [-t tty] [-c procname] [-SIGNAL] [procname …]
The options are as follows:
-d Be more verbose about what will be done, but do not send any signal. The total number of user processes and the real user ID is shown. A list of the processes that
will be sent the signal will be printed, or a message indicating that no matching processes have been found.
-e Use the effective user ID instead of the (default) real user ID for matching processes specified with the -u option.
-help Give a help on the command usage and exit.
-I Request confirmation before attempting to signal each process.
-l List the names of the available signals and exit, like in kill(1).
-m Match the argument procname as a (case sensitive) regular expression against the names of processes found. CAUTION! This is dangerous, a single dot will match any
process running under the real UID of the caller.
-v Be verbose about what will be done.
-s Same as -v, but do not send any signal.
-SIGNAL Send a different signal instead of the default TERM. The signal may be specified either as a name (with or without a leading “SIG”), or numerically.
-u user Limit potentially matching processes to those belonging to the specified user.
-t tty Limit potentially matching processes to those running on the specified tty.
-c procname Limit potentially matching processes to those matching the specified procname.
-q Suppress error message if no processes are matched.
-z Do not skip zombies. This should not have any effect except to print a few error messages if there are zombie processes that match the specified pattern.
ALL PROCESSES
Sending a signal to all processes with the given UID is already supported by kill(1). So use kill(1) for this job (e.g. “kill -TERM -1” or as root “echo kill -TERM -1 | su -m
”).
IMPLEMENTATION NOTES
This FreeBSD implementation of killall has completely different semantics as compared to the traditional UNIX System V behavior of killall. The latter will kill all processes that the
current user is able to kill, and is intended to be used by the system shutdown process only.
EXIT STATUS
The killall utility exits 0 if some processes have been found and signalled successfully. Otherwise, a status of 1 will be returned.
EXAMPLES
Send SIGTERM to all firefox processes:
killall firefox
Send SIGTERM to firefox processes belonging to USER:
killall -u ${USER} firefox
Stop all firefox processes:
killall -SIGSTOP firefox
Resume firefox processes:
killall -SIGCONT firefox
Show what would be done to firefox processes, but do not actually signal them:
killall -s firefox
Send SIGTERM to all processes matching provided pattern (like vim and vimdiff):
killall -m 'vim*'
DIAGNOSTICS
Diagnostic messages will only be printed if the -d flag is used.
SEE ALSO
kill(1), pkill(1), sysctl(3)
… note::
夕阳无限好,只是近黄昏。
李商隐《乐游原 / 登乐游原》
Linux last
命令用于显示用户最近的登录信息。
官方定义为:
last, lastb - show listing of last logged in users
通过读取/var/log/wtmp文件来获取这些信息。
$ last [-R] [-num] [ -n num ] [-adFiowx] [ -f file ] [ -t YYYYMMDDHHMMSS] [name...] [tty...]
参数:
-R
省略 hostname 的栏位
-n num
展示最近的 num 笔登录记录
username
展示 username 的登入讯息
tty
限制登入讯息包含终端机代号
$ last
username2 pts/17 192.168.100.123 Wed Mar 23 22:14 still logged in
username3 pts/20 localhost:11.0 Wed Mar 23 14:26 - 15:48 (01:21)
username4 pts/23 localhost:11.0 Wed Mar 23 14:26 - 15:48 (01:21)
username4 pts/4 192.168.100.125 Thu Jun 10 18:37 - 22:57 (04:20)
username5 pts/4 192.168.100.125 Thu Jun 10 18:21 - 18:21 (00:00)
username6 pts/9 192.168.100.126 Thu Jun 10 18:11 - 18:20 (00:09)
username7 pts/15 192.168.100.122 Thu Jun 10 18:04 - 23:44 (1+05:40)
username8 pts/14 192.168.100.121 Thu Jun 10 17:59 - 07:50 (13:50)
username9 pts/9 192.168.100.126 Thu Jun 10 17:59 - 18:03 (00:04)
wtmp begins Thu Jun 10 17:33:14 2013
$ last -3
username2 pts/17 192.168.100.123 Wed Mar 23 22:14 still logged in
username3 pts/20 localhost:11.0 Wed Mar 23 14:26 - 15:48 (01:21)
username4 pts/23 localhost:11.0 Wed Mar 23 14:26 - 15:48 (01:21)
wtmp begins Thu Jun 10 17:33:14 2013
$ last -3 -R
username2 pts/17 Wed Mar 23 22:14 still logged in
username3 pts/20 Wed Mar 23 14:26 - 15:48 (01:21)
username4 pts/23 Wed Mar 23 14:26 - 15:48 (01:21)
wtmp begins Thu Jun 10 17:33:14 2013
$ last -n 5 -a -i
username3 pts/17 Wed Mar 23 22:14 still logged in 192.168.100.123
username5 pts/20 Wed Mar 23 14:26 - 15:48 (01:21) 0.0.0.0
username6 pts/23 Wed Mar 23 14:26 - 15:48 (01:21) 0.0.0.0
username7 pts/19 Wed Mar 23 13:46 - 15:48 (02:01) 192.168.100.123
username8 pts/17 Wed Mar 23 13:18 - 15:47 (02:29) 192.168.100.123
wtmp begins Thu Jun 10 17:33:14 2013
在Linux
系统中如果希望在命令行里查阅文件,有三个命令可选;如果是GUI界面,请自行绕过,选择太多了。
cat
:入门级的
more
:文件内容一屏幕装不下的时候使用的
less
:可以简单地认为是more
的升级版
首推less
命令的原因是该命令可以往回卷动浏览已经看过的部分,但是more
是不可以的。或者可以认为less
是查看模式下的vim
。
首先看看为什么用less
命令吧。
If the file is longer than the size of Terminal window then it will be not easy to read or view all the content of the file easily. But there is a tweak, you can use less with cat command. It will give user an ability to scroll forward and backward through the content of the files using PgUp and PgDn keys or Up and Down Arrow keys on the keyboard.
如题,在文件内容足够多、屏幕又不够大的时候,就会出现上面描述的问题,这就出现了less
命令。
Linux
系统可以说把少就是多
这个哲学用到了极致,恰如小巧优美的C语言,不该有的功能坚决不给你提供,应该有的也不给你提供,哈哈,比如内存的管理,程序员就是神,你就是神。
less - opposite of more # 我觉得这是废话
我嘞个去,什么鬼?这是什么意思,我也知道少的反义词是多,大的反义词是小。
别急,那就看看more的含义吧,不会是 opposite of less
吧。OMG
more - file perusal filter for crt viewing
什么意思,淡定,听我说,在Linux
系统中有三种命令可以用来查阅全部的文件,分别是cat
、more
和less
命令,关于more
的解释主要针对在上古年代的计算机,你不理解crt
也没有关系,毕竟现在已经是Retina
的年代了。
一起看看下面的实例吧。
less [参数] 文件
与其他命令类似,直接跟上文件名即可。
接下来依旧使用/etc/services来进行示例。
这个是more命令里比较好用的一个功能,less通过-m参数同样可以显示目前浏览的百分比。
$ less -m /etc/services
auditd 48/udp # Digital Audit Daemon
la-maint 51/tcp # IMP Logical Address Maintenance
la-maint 51/udp # IMP Logical Address Maintenance
xns-time 52/tcp # XNS Time Protocol
xns-time 52/udp # XNS Time Protocol
xns-ch 54/tcp # XNS Clearinghouse
xns-ch 54/udp # XNS Clearinghouse
isi-gl 55/tcp # ISI Graphics Language
isi-gl 55/udp # ISI Graphics Language
xns-auth 56/tcp # XNS Authentication
xns-auth 56/udp # XNS Authentication
xns-mail 58/tcp # XNS Mail
xns-mail 58/udp # XNS Mail
ni-mail 61/tcp # NI MAIL
ni-mail 61/udp # NI MAIL
5%
此时可以在左下角看到,有个百分比。
使用-N可以实现cat中-n的效果,显示行号
$ less -N /etc/services
1 # /etc/services:
2 # $Id: services,v 1.55 2013/04/14 ovasik Exp $
3 #
4 # Network services, Internet style
5 # IANA services version: last updated 2013-04-10
6 #
7 # Note that it is presently the policy of IANA to assign a single well-known
8 # port number for both TCP and UDP; hence, most entries here have two entries
9 # even if the protocol doesn't support UDP operations.
10 # Updated from RFC 1700, ``Assigned Numbers'' (October 1994). Not all ports
11 # are included, only the more common ones.
12 #
13 # The latest IANA port assignments can be gotten from
14 # http://www.iana.org/assignments/port-numbers
15 # The Well Known Ports are those from 0 through 1023.
16 # The Registered Ports are those from 1024 through 49151
在less中,可以比较容易的搜索字符串,比如可以:
其实这些功能或者热键与vim相同。
在用less打开文件后,可以直接输入/number
来搜索number这个字符串,回车后可以看到该字符串高亮显示,这个也是优于more的一点;同样?number
可以反向搜索number字符串。
可以通过-i选项来忽略搜索时的大小写
可以通过-b <缓冲区大小> 设置缓冲区的大小,这个一般用于文件很大、巨大、不是一般大的时候,此时你的内存可能不足以打开整个文件,比如只有4G的内存,而你却要打开10G的文件,此时可以通过该选项来设置,默认单位为KB,比如
$ less -b 1024 filename
即设置1024KB的文件缓冲区
要编辑一个正在用less
浏览的文件,可以按下v
键,此时会调用环境变量$EDITOR
所指定的编辑器来编辑;退出编辑器后,你可以继续用less浏览了。
我比较喜欢less
的原因是对于该命令的很多操作都是与vim
相同,而我是一个重度vimer
,so 推荐less
。
说几个比较简单的移动:
j
向下移动一行k
向上移动一行g
移动到第一行G
移动到最后一行b
向上翻一页d
向下翻半页u
向上翻半页y
向上滚动一行空格键
向下翻一页回车键
向下滚动一行
ln
命令是一个非常重要的命令,可以为某一个文件或目录在其他不同的位置建立一个同步的链接。部分功能与Windows的快捷方式类似,但更加强大。
官方解释为:
ln - make links between files
当我们需要在不同的目录,或者不同的工程,甚至是不同的人员需要用到同一个文件的时候,此时不需要每个位置都通过cp
来拷贝一份,因为在源文件更新的时候,这个文件是不会同步更新的 。而此时ln
命令就不一样了,通过该命令链接到源文件或目录,不仅不用占用重复的磁盘空间,还可以同步更新。NICE。
$ ln [参数][源文件或目录][目标文件或目录]
其中参数的格式为
-f
,或 --force
: 强制执行,这个在链接已经存在的情况下必用-s
,或 --symbolic
:创建符号链接
在Linux文件系统中,又有两种链接类型:
硬链接与源文件共享同一个inode,并不额外复制数据,只是把链接计数加一;而软链接是一种特殊的文件,保存的只是目标路径,占用很小的磁盘空间。
默认情况下,不加任何参数,创建的是硬链接,如下,创建源文件a.log的硬链接a1.log:
$ ln a.log a1.log
$ ll
-rw-rw-r--. 2 user user 85710 Apr 5 21:29 a.log
-rw-rw-r--. 2 user user 85710 Apr 5 21:29 a1.log
这个时候修改源文件a.log的部分内容,可以看到硬链接也同步更新。
$ vim a.log
$ ll
-rw-rw-r--. 2 user user 85716 Apr 5 21:34 a.log
-rw-rw-r--. 2 user user 85716 Apr 5 21:34 a1.log
如果需要创建软链接,就需要参数-s
,如下,创建源文件a.log的软链接a1.log:
$ ln -s a.log a1.log
$ ll
-rw-rw-r--. 1 user user 85710 Apr 5 21:29 a.log
lrwxrwxrwx. 1 user user 5 Apr 5 21:30 a1.log -> a.log
这个时候修改源文件a.log的部分内容,可以看到软链接没有更新,不过其指向的内容依然更新了。
$ vim a.log
$ ll
-rw-rw-r--. 1 user user 85716 Apr 5 21:34 a.log
lrwxrwxrwx. 1 user user 5 Apr 5 21:30 a1.log -> a.log
此时可以看到,对于软链接a1.log而言,其仅为一个符号链接,用file
看一下:
$ file a1.log
a1.log: symbolic link to `a.log'
此时通过ln创建a.log的硬链接ah.log和软链接as.log,然后看一下如果删除源文件会发生什么情况。
# 创建软硬链接
$ ln a.log ah.log
$ ln -s a.log as.log
$ ll
-rw-rw-r--. 2 user user 85716 Apr 5 21:34 a.log
lrwxrwxrwx. 1 user user 5 Apr 5 21:30 as.log -> a.log
-rw-rw-r--. 2 user user 85716 Apr 5 21:34 ah.log
# 删除源文件
$ rm a.log
# 此时如果有颜色显示,as.log应该会是红色的警告色
$ ll
lrwxrwxrwx. 1 user user 5 Apr 5 21:30 as.log -> a.log
-rw-rw-r--. 2 user user 85716 Apr 5 21:34 ah.log
# 此时看一下as.log的状态
$ file as.log
as.log: broken symbolic link to `a.log'
可以看到如果删除了源文件,硬链接不受影响,但是软链接已经提示链接损坏了。
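上面的结论可以在临时目录里完整复现一遍(假设是带 GNU coreutils 的 Linux 环境,stat -c 为 GNU 语法):

```shell
tmpdir=$(mktemp -d)
cd "$tmpdir"
echo "hello" > a.log
ln a.log ah.log        # 硬链接:与 a.log 共享同一个 inode
ln -s a.log as.log     # 软链接:单独的小文件,内容只是目标路径
# 硬链接与源文件 inode 相同
[ "$(stat -c %i a.log)" = "$(stat -c %i ah.log)" ] && echo "same inode"
rm a.log
cat ah.log                                # 仍输出 hello
cat as.log 2>/dev/null || echo "broken"   # 软链接已失效
cd / && rm -rf "$tmpdir"
```

删除 a.log 后硬链接 ah.log 的内容原封不动,而软链接 as.log 读取失败,与上面的 file 输出相印证。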
在软链接存在的情况下,如果再创建一个同名的,会报错,此时就需要强制创建了,加上-f
参数即可。
$ ln -s b.log as.log
ln: failed to create symbolic link 'as.log': File exists
# 强制创建
$ ln -sf b.log as.log
$ ll
-rw-rw-r--. 1 user user 85716 Apr 5 22:16 a.log
-rw-rw-r--. 2 user user 85716 Apr 5 21:34 ah.log
lrwxrwxrwx. 1 user user 5 Apr 5 22:21 as.log -> b.log
-rw-rw-r--. 1 user user 85716 Apr 5 22:17 b.log
… note::
众里寻他千百度,蓦然回首,那人却在灯火阑珊处
Linux locate
命令用于查找符合条件的文档、程序、目录等等。这个命令会在数据库中查找符合条件的各种信息。
一般情况我们只需要输入 locate name
即可查找。
官方定义为:
locate
- list files in databases that match a pattern
使用方法为:
$ locate [-d path | --database=path] [-e | -E | --[non-]existing] [-i | --ignore-case] [-0 | --null] [-c |
--count] [-w | --wholename] [-b | --basename] [-l N | --limit=N] [-S | --statistics] [-r | --regex ] [--regex‐
type R] [--max-database-age D] [-P | -H | --nofollow] [-L | --follow] [--version] [-A | --all] [-p | --print]
[--help] pattern...
看着很复杂,不过常用的参数倒是不多,基本为:
-n
: 至多显示 n 个输出。-i, --ignore-case
: 忽略大小写
默认情况下,locate
直接跟上需要查找的信息就可以了,如下所示:
$ locate set_vis.cpp
/home/user/mycode/src/set_vis.cpp
# 以查找apropos为例
$ locate apropos
/usr/bin/apropos
/usr/local/difmap/help/apropos.hlp
/usr/share/emacs/24.3/lisp/apropos.elc
/usr/share/man/de/man1/apropos.1.gz
/usr/share/man/es/man1/apropos.1.gz
/usr/share/man/fr/man1/apropos.1.gz
/usr/share/man/id/man1/apropos.1.gz
/usr/share/man/it/man1/apropos.1.gz
/usr/share/man/ja/man1/apropos.1.gz
/usr/share/man/man1/apropos.1.gz
/usr/share/man/nl/man1/apropos.1.gz
/usr/share/man/pl/man1/apropos.1.gz
/usr/share/man/ru/man1/apropos.1.gz
如果输出的信息很多,仅仅希望看到前面的几个,使用-n
参数即可
# 仅仅查看前面的3个
$ locate -n 3 apropos
/usr/bin/apropos
/usr/local/difmap/help/apropos.hlp
/usr/share/emacs/24.3/lisp/apropos.elc
部分情况下,可能有大小写混淆的情况,此时使用-i
参数即可
$ locate -i set_vis.cpp
/home/user/mycode/src/set_vis.cpp
/home/user/mycode_CPP/src/set_VIS.cpp
不过刚安装的系统,这个命令并不一定有输出,主要是因为locate
与 find
不同, find
直接在硬盘找,而locate
只在数据库中查找。
这个数据库在CentOS系统默认的为 /var/lib/mlocate/mlocate.db 中,所以 locate
的查找会比较快,但并一定是实时的,而是以数据库的更新为准。
可以通过下面的命令手工升级数据库 ,命令为:
$ updatedb
然后就可以使用了。
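数据库没来得及更新时,可以用 find 实时扫描来对照验证,下面在临时目录演示(set_vis.cpp 只是沿用上文的示例文件名):

```shell
# locate 查的是数据库,find 则实时扫描磁盘;数据库尚未更新时可用 find 兜底
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/src"
touch "$tmpdir/src/set_vis.cpp"
find "$tmpdir" -name 'set_vis.cpp'   # 实时找到刚创建的文件
rm -rf "$tmpdir"
```

新建的文件 find 立刻可见,而 locate 要等 updatedb 之后才能查到,这正是两者的本质区别。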
… note::
寻寻觅觅,冷冷清清,凄凄惨惨戚戚。
宋 李清照《声声慢·寻寻觅觅》
如果linux
命令来个排名,ls
命令应该是最常用的命令,除非你像黄蓉的母亲,有过目不忘的本领,惹得黄药师抱憾终身。
ls
命令是list
的缩写,通过ls命令,我们可以查看目录的内容,确定各种重要文件和目录的属性。
ls [参数] [路径]
如果不加任何参数,默认列出当前目录的内容。
$ ls /etc/sysconfig/network-scripts
ifcfg-em1
ifcfg-em2
ifcfg-em3
ifcfg-em4
....
-l 就是使用long listing format长格式,来显示更多的内容信息。
$ ls -l /etc/sysconfig/network-scripts
total 264
-rw-r--r--. 1 root root 341 Nov 30 10:56 ifcfg-em1
-rw-r--r--. 1 root root 294 May 13 2016 ifcfg-em2
-rw-r--r--. 1 root root 272 May 10 2016 ifcfg-em3
-rw-r--r--. 1 root root 272 May 10 2016 ifcfg-em4
......
如果希望看到最近创建的文件,就需要用到-t参数了。
$ ls -lt /etc/sysconfig/network-scripts/
total 264
-rw-r--r--. 1 root root 341 Nov 30 10:56 ifcfg-em1
-rw-r--r--. 1 root root 294 May 13 2016 ifcfg-em2
-rw-r--r--. 1 root root 272 May 10 2016 ifcfg-em4
-rw-r--r--. 1 root root 272 May 10 2016 ifcfg-em3
...
如果希望删除很早以前的文件,看到最早创建的文件,就需要用到-r参数了。
$ ls -ltr /etc/sysconfig/network-scripts/
total 264
...
-rw-r--r--. 1 root root 272 May 10 2016 ifcfg-em3
-rw-r--r--. 1 root root 272 May 10 2016 ifcfg-em4
-rw-r--r--. 1 root root 294 May 13 2016 ifcfg-em2
-rw-r--r--. 1 root root 341 Nov 30 10:56 ifcfg-em1
如果希望按照文件大小排序,就需要用到-S参数了。
$ ls -lS /etc/sysconfig/network-scripts/
total 264
...
-rw-r--r--. 1 root root 341 Nov 30 10:56 ifcfg-em1
-rw-r--r--. 1 root root 294 May 13 2016 ifcfg-em2
-rw-r--r--. 1 root root 272 May 10 2016 ifcfg-em3
-rw-r--r--. 1 root root 272 May 10 2016 ifcfg-em4
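-S 的排序效果可以在临时目录里快速验证一下:

```shell
# 构造两个大小不同的文件,验证 -S 按大小从大到小排序
tmpdir=$(mktemp -d)
cd "$tmpdir"
printf '12345678' > big.txt    # 8 字节
printf '1' > small.txt         # 1 字节
ls -S                          # 先输出 big.txt,再输出 small.txt
cd / && rm -rf "$tmpdir"
```

配合 -r 可以反转为从小到大,与上面 -t/-r 的组合逻辑一致。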
lsblk
命令可以查看系统中的块设备信息
$ lsblk
这个命令会列出系统中所有的块设备(比如硬盘、分区和挂载点)的信息。
默认情况下,它会显示每个设备的名称、大小、类型、挂载点等信息。
如果需要显示更详细的信息,可以使用 -a
或 --all
选项:
$ lsblk -a
这会显示完整的块设备信息,包括未挂载的设备。
当然,还可以根据需求,定制化输出,不过单单这个命令,足矣。
Linux的CPU设备查看器。lscpu
命令用来显示cpu的相关信息。
lscpu
从sysfs和/proc/cpuinfo收集cpu体系结构信息,命令的输出比较易读。
命令输出的信息包含cpu数量、线程、核数、socket和Non-Uniform Memory Access(NUMA)、缓存等等。
官方定义为:
lscpu
- display information about the CPU architecture
参数基本用处不大,默认即可,部分参数可以查看offline和online的设备信息。
$ lscpu
Architecture: x86_64 #架构信息
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 64 #逻辑cpu颗数
On-line CPU(s) list: 0-63
Thread(s) per core: 2 #每个核心线程
Core(s) per socket: 16 #每个cpu插槽核数/每颗物理cpu核数
Socket(s): 2 #cpu插槽数
NUMA node(s): 2
Vendor ID: GenuineIntel #cpu厂商ID
CPU family: 6 #cpu系列
Model: 63 #型号
Model name: Intel(R) Xeon(R) CPU E5-2698 v3 @ 2.30GHz
Stepping: 2 #步进
CPU MHz: 1290.335 #cpu主频
BogoMIPS: 4604.47
Virtualization: VT-x #cpu支持的虚拟化技术
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 40960K
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
其中几个概念需要理解清楚,基本比较重要的都有了备注。
其中第一个为CPU(s),这个值由Socket × Core × Thread得出,也就是逻辑CPU的个数。
CPU(s): 64 #逻辑CPU数
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
socket: 2
而其他几个概念为:Socket为物理CPU插槽的个数,Core(s) per socket为每颗物理CPU的核心数,Thread(s) per core为每个核心的硬件线程数(即超线程)。
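按上面示例输出中的数值,可以口算验证这个关系:

```shell
# 逻辑CPU数 = Socket数 × 每插槽核心数 × 每核心线程数
sockets=2
cores_per_socket=16
threads_per_core=2
echo $((sockets * cores_per_socket * threads_per_core))   # 输出 64,对应 CPU(s): 64
```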
lshw
是Hardware Lister
的缩写,字面意思即列出系统硬件信息。
可以显示关于计算机硬件组件(如处理器、内存、硬盘、网卡等)的详细信息,对于系统管理员和用户来说是一个非常有用的工具。
不加任何参数的话也可用,只是输出的信息极多,真正有用的不多。
$ sudo lshw
这将输出系统中所有可用硬件的详细信息,包括硬件组件的制造商、型号、驱动程序等。
显示摘要信息:相对而言,这个反而好一些,简单的就是有用的
$ sudo lshw -short
这将显示硬件的摘要信息,包括设备名、类别、描述等。
显示指定类型的硬件信息:
$ sudo lshw -C network
上述示例将仅显示网络相关的硬件信息。
比如还可以查看memory
、cpu
、disk
等信息。
lshw
提供了全面的硬件信息,帮助用户了解系统配置和硬件组件的细节。在查看和诊断硬件问题或了解系统配置时,它是一个非常有用的工具。
lspci
命令用于显示PCI总线的信息,以及所有已连接的PCI设备信息。
官方定义为:
lspci
- list all PCI devices
默认情况下,lspci
会显示一个简短的设备列表。可以使用一些参数来显示更详细的输出,或供其他程序解析的输出。
不过需要注意的是,在许多操作系统上,对 PCI 配置空间的某些部分的访问仅限于 root,因此普通用户可用的 lspci 功能受到限制。
使用方法为:
$ lspci [options]
其中常用的三个选项为:
-n
以数字方式显示PCI厂商和设备代码-t
以树状结构显示PCI设备的层次关系-v
显示更详细的输出信息
默认无参数的显示
$ lspci
00:00.0 Host bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMI2 (rev 02)
00:01.0 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 1 (rev 02)
00:02.0 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 2 (rev 02)
00:03.0 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 3 (rev 02)
00:03.2 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 3 (rev 02)
00:04.0 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 0 (rev 02)
00:04.1 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 1 (rev 02)
00:04.2 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 2 (rev 02)
......
以数字形式显示
$ lspci -n
00:00.0 0600: 8086:2f00 (rev 02)
00:01.0 0604: 8086:2f02 (rev 02)
00:02.0 0604: 8086:2f04 (rev 02)
00:03.0 0604: 8086:2f08 (rev 02)
00:03.2 0604: 8086:2f0a (rev 02)
00:04.0 0880: 8086:2f20 (rev 02)
00:04.1 0880: 8086:2f21 (rev 02)
00:04.2 0880: 8086:2f22 (rev 02)
......
$ lspci -nn
00:00.0 Host bridge [0600]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMI2 [8086:2f00] (rev 02)
00:01.0 PCI bridge [0604]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 1 [8086:2f02] (rev 02)
00:02.0 PCI bridge [0604]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 2 [8086:2f04] (rev 02)
00:03.0 PCI bridge [0604]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 3 [8086:2f08] (rev 02)
00:03.2 PCI bridge [0604]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 3 [8086:2f0a] (rev 02)
00:04.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 0 [8086:2f20] (rev 02)
00:04.1 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 1 [8086:2f21] (rev 02)
00:04.2 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 2 [8086:2f22] (rev 02)
......
$ lspci -t
-+-[0000:ff]-+-08.0
| +-08.2
| +-1f.0
| \-1f.2
+-[0000:80]-+-01.0-[81]----00.0
| +-04.0
| +-05.1
| +-05.2
| \-05.4
+-[0000:7f]-+-08.0
| +-08.2
| +-0c.1
| \-0c.2
常用参数:
参数 | 含义 |
---|---|
-b | 以总线为中心的视图 |
-s | 仅显示指定总线插槽的设备和功能块信息 |
-i | 指定PCI编号列表文件,不使用默认文件 |
-m | 以机器可读方式显示PCI设备信息 |
如果要报告PCI设备驱动或lspci本身的错误,请附上lspci -vvx
(最好是lspci -vvxxx
)的输出(但请参阅下文了解可能的注意事项)。
OPTIONS
Basic display modes
-m Dump PCI device data in a backward-compatible machine readable
form. See below for details.
-mm Dump PCI device data in a machine readable form for easy parsing
by scripts. See below for details.
Display options
-vv Be very verbose and display more details. This level includes
everything deemed useful.
-vvv Be even more verbose and display everything we are able to
parse, even if it doesn't look interesting at all (e.g., undefined
memory regions).
-k Show kernel drivers handling each device and also kernel modules
capable of handling it. Turned on by default when -v is given
in the normal mode of output. (Currently works only on Linux
with kernel 2.6 or newer.)
-x Show hexadecimal dump of the standard part of the configuration
space (the first 64 bytes or 128 bytes for CardBus bridges).
-xxx Show hexadecimal dump of the whole PCI configuration space. It
is available only to root as several PCI devices crash when you
try to read some parts of the config space (this behavior probably
doesn't violate the PCI standard, but it's at least very
stupid). However, such devices are rare, so you needn't worry
much.
-xxxx Show hexadecimal dump of the extended (4096-byte) PCI configuration
space available on PCI-X 2.0 and PCI Express buses.
-b Bus-centric view. Show all IRQ numbers and addresses as seen by
the cards on the PCI bus instead of as seen by the kernel.
-D Always show PCI domain numbers. By default, lspci suppresses
them on machines which have only domain 0.
Options to control resolving ID's to names
-nn Show PCI vendor and device codes as both numbers and names.
-q Use DNS to query the central PCI ID database if a device is not
found in the local pci.ids file. If the DNS query succeeds, the
result is cached in ~/.pciids-cache and it is recognized in subsequent
runs even if -q is not given any more. Please use this
switch inside automated scripts only with caution to avoid overloading
the database servers.
-qq Same as -q, but the local cache is reset.
-Q Query the central database even for entries which are recognized
locally. Use this if you suspect that the displayed entry is
wrong.
Options for selection of devices
-s [[[[<domain>]:]<bus>]:][<device>][.[<func>]]
Show only devices in the specified domain (in case your machine
has several host bridges, they can either share a common bus
number space or each of them can address a PCI domain of its
own; domains are numbered from 0 to ffff), bus (0 to ff), device
(0 to 1f) and function (0 to 7). Each component of the device
address can be omitted or set to "*", both meaning "any value".
All numbers are hexadecimal. E.g., "0:" means all devices on
bus 0, "0" means all functions of device 0 on any bus, "0.3"
selects third function of device 0 on all buses and ".4" shows
only the fourth function of each device.
-d [<vendor>]:[<device>][:<class>]
Show only devices with specified vendor, device and class ID.
The ID's are given in hexadecimal and may be omitted or given as
"*", both meaning "any value".
Other options
-i <file>
Use <file> as the PCI ID list instead of
/usr/share/hwdata/pci.ids.
-p <file>
Use <file> as the map of PCI ID's handled by kernel modules. By
default, lspci uses /lib/modules/kernel_version/modules.pcimap.
Applies only to Linux systems with recent enough module tools.
-M Invoke bus mapping mode which performs a thorough scan of all
PCI devices, including those behind misconfigured bridges, etc.
This option gives meaningful results only with a direct hardware
access mode, which usually requires root privileges. Please
note that the bus mapper only scans PCI domain 0.
--version
Shows lspci version. This option should be used stand-alone.
PCI access options
The PCI utilities use the PCI library to talk to PCI devices (see
pcilib(7) for details). You can use the following options to influence
its behavior:
-A <method>
The library supports a variety of methods to access the PCI
hardware. By default, it uses the first access method available,
but you can use this option to override this decision. See
-A help for a list of available methods and their descriptions.
-O <param>=<value>
The behavior of the library is controlled by several named
parameters. This option allows to set the value of any of the
parameters. Use -O help for a list of known parameters and their
default values.
-H1 Use direct hardware access via Intel configuration mechanism 1.
(This is a shorthand for -A intel-conf1.)
-H2 Use direct hardware access via Intel configuration mechanism 2.
(This is a shorthand for -A intel-conf2.)
-F <file>
Instead of accessing real hardware, read the list of devices and
values of their configuration registers from the given file produced
by an earlier run of lspci -x. This is very useful for
analysis of user-supplied bug reports, because you can display
the hardware configuration in any way you want without disturbing
the user with requests for more dumps.
-G Increase debug level of the library.
MACHINE READABLE OUTPUT
If you intend to process the output of lspci automatically, please use
one of the machine-readable output formats (-m, -vm, -vmm) described in
this section. All other formats are likely to change between versions
of lspci.
All numbers are always printed in hexadecimal. If you want to process
numeric ID's instead of names, please add the -n switch.
Simple format (-m)
In the simple format, each device is described on a single line, which
is formatted as parameters suitable for passing to a shell script,
i.e., values separated by whitespaces, quoted and escaped if necessary.
Some of the arguments are positional: slot, class, vendor name, device
name, subsystem vendor name and subsystem name (the last two are empty
if the device has no subsystem); the remaining arguments are option-like:
-rrev Revision number.
-pprogif
Programming interface.
The relative order of positional arguments and options is undefined.
New options can be added in future versions, but they will always have
a single argument not separated from the option by any spaces, so they
can be easily ignored if not recognized.
Verbose format (-vmm)
The verbose output is a sequence of records separated by blank lines.
Each record describes a single device by a sequence of lines, each line
containing a single `tag: value' pair. The tag and the value are separated
by a single tab character. Neither the records nor the lines
within a record are in any particular order. Tags are case-sensitive.
The following tags are defined:
Slot The name of the slot where the device resides
([domain:]bus:device.function). This tag is always the first in
a record.
Class Name of the class.
Vendor Name of the vendor.
Device Name of the device.
SVendor
Name of the subsystem vendor (optional).
SDevice
Name of the subsystem (optional).
PhySlot
The physical slot where the device resides (optional, Linux
only).
Rev Revision number (optional).
ProgIf Programming interface (optional).
Driver Kernel driver currently handling the device (optional, Linux
only).
Module Kernel module reporting that it is capable of handling the
device (optional, Linux only).
NUMANode
NUMA node this device is connected to (optional, Linux only).
New tags can be added in future versions, so you should silently ignore
any tags you don't recognize.
Backward-compatible verbose format (-vm)
In this mode, lspci tries to be perfectly compatible with its old versions.
It's almost the same as the regular verbose format, but the
Device tag is used for both the slot and the device name, so it occurs
twice in a single record. Please avoid using this format in any new
code.
FILES
/usr/share/hwdata/pci.ids
A list of all known PCI ID's (vendors, devices, classes and subclasses).
Maintained at http://pciids.sourceforge.net/, use the
update-pciids utility to download the most recent version.
/usr/share/hwdata/pci.ids.gz
If lspci is compiled with support for compression, this file is
tried before pci.ids.
~/.pciids-cache
All ID's found in the DNS query mode are cached in this file.
首先,这man是什么意思?
最开始很多人不知道这是什么意思,不懂就找man呀。
其实man是manual的缩写,也就是手册的意思。
man
命令提供了系统命令的详细帮助信息。
Linux提供了丰富的帮助手册,当你需要查看某个命令的参数时不必到处上网查找,只要man
一下即可。这个也是每个程序员必备的功能,在没有网络的情况下,man
能解决很多问题和疑惑。
看一下官方定义:
man - format and display the on-line manual pages
如果要读懂并使用man
,首先需要了解man
命令输出的格式,下面的几个是比较常用且需要注意的:
同时也可以使用man man 查看man的使用方法。
章节 | 含义 |
---|---|
NAME | 命令名称及功能简要说明 |
SYNOPSIS | 用法说明,包括可用的选项 |
DESCRIPTION | 命令功能的详细说明,可能包括每一个选项的意义 |
OPTIONS | 每一选项的意义 |
EXAMPLES | 一些使用示例 |
比如输入man ls
后,跳出下面的内容:
LS(1) User Commands LS(1)
NAME
ls - list directory contents
SYNOPSIS
ls [OPTION]... [FILE]...
DESCRIPTION
List information about the FILEs (the current directory by default). Sort entries alphabetically if none of
-cftuvSUX nor --sort is specified.
Mandatory arguments to long options are mandatory for short options too.
-a, --all
do not ignore entries starting with .
-A, --almost-all
do not list implied . and ..
--author
with -l, print the author of each file
-b, --escape
print C-style escapes for nongraphic characters
--block-size=SIZE
scale sizes by SIZE before printing them; e.g., '--block-size=M' prints sizes in units of 1,048,576
bytes; see SIZE format below
-B, --ignore-backups
Manual page ls(1) line 1 (press h for help or q to quit)
此时可以通过空格键或者回车键来向后翻屏或者翻页,可以使用b或者k向前查看。
查看关键词时可以使用:
/关键词
向后查找 n
:下一个
?关键词
向前查找 N
:前一个
可以通过q
来退出。
ls后面还有一个(1),详细的解释可以参考《Linux 安装 man 帮助程序》
man
有个参数为-f
,就是whatis
的功能,比如:
$ man -f ls cd file cat more less
ls (1) - list directory contents
ls (1p) - list directory contents
cd (1) - bash built-in commands, see bash(1)
cd (1p) - change the working directory
cd (n) - Change working directory
file (1) - determine file type
file (1p) - determine file type
file (n) - Manipulate file names and attributes
cat (1) - concatenate files and print on the standard output
cat (1p) - concatenate and print files
more (1) - file perusal filter for crt viewing
more (1p) - display files on a page-by-page basis
less (1) - opposite of more
less (3pm) - perl pragma to request less of something
与whatis命令完全一致
man
有个参数为-k
,就是apropos
的功能,比如:
$ man -k who
at.allow (5) - determine who can submit jobs via at or batch
at.deny (5) - determine who can submit jobs via at or batch
btrfs-filesystem (8) - command group of btrfs that usually work on the whole filesystem
docker-trust-signer (1) - Manage entities who can sign Docker images
ipsec_newhostkey (8) - generate a new raw RSA authentication key for a host
ipsec_showhostkey (8) - show host's authentication key
w (1) - Show who is logged on and what they are doing.
who (1) - show who is logged on
who (1p) - display who is on the system
whoami (1) - print effective userid
与apropos命令完全一致
如果遇到一个不熟悉或者完全不知道的命令,此时可以通过下面的3个步骤来了解:
man -k command
查询所有类似帮助文件信息,或许就能找到你需要的信息;man -f command
查询名称为command
的简要帮助信息(即whatis);man N command
通过直接定位第N节获得详细帮助信息
mkdir
命令用来创建指定名称的目录,看看官方定义:
make directories
所以mkdir是这两个单词的缩写。mkdir要求创建目录的用户在当前目录中具有_写权限_,并且指定的目录名不能是当前目录中已有的目录,参数-p可以指定建立多级目录,这个参数也是用的最多的。
mkdir [可选项] 目录
-m
, --mode=模式,设定权限<模式> (类似 chmod)
-p
, --parents 可以是一个路径名称。此时若路径中的某些目录尚不存在,加上此选项后,系统将自动建立好那些尚不存在的目录,即一次可以建立多个目录
$ mkdir hello
$ mkdir -p a/b/c/d/e/f/g
$ mkdir -m 777 test
下面的一个命令可以创建一个项目的目录结构,如下:
$ mkdir -vp project/{src/,include/,lib/,bin/,doc/{info,product},logs/{info,product},service/deploy/{info,product}}
mkdir: created directory ‘project’
mkdir: created directory ‘project/src/’
mkdir: created directory ‘project/include/’
mkdir: created directory ‘project/lib/’
mkdir: created directory ‘project/bin/’
mkdir: created directory ‘project/doc’
mkdir: created directory ‘project/doc/info’
mkdir: created directory ‘project/doc/product’
mkdir: created directory ‘project/logs’
mkdir: created directory ‘project/logs/info’
mkdir: created directory ‘project/logs/product’
mkdir: created directory ‘project/service’
mkdir: created directory ‘project/service/deploy’
mkdir: created directory ‘project/service/deploy/info’
mkdir: created directory ‘project/service/deploy/product’
$ tree project/
project
├── bin
├── doc
│ ├── info
│ └── product
├── include
├── lib
├── logs
│ ├── info
│ └── product
├── service
│ └── deploy
│ ├── info
│ └── product
└── src
14 directories, 0 files
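-m 的效果也可以顺手验证一下:用 -m 显式指定的权限不受 umask 影响(这里在临时目录中创建,stat -c 为 GNU 语法):

```shell
tmpdir=$(mktemp -d)
mkdir -m 700 "$tmpdir/secret"   # 显式指定权限为 700
stat -c '%a' "$tmpdir/secret"   # 输出 700
rm -rf "$tmpdir"
```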
more功能类似 cat ,cat命令是整个文件的内容从上到下显示在屏幕上。 more会以一页一页的显示方便使用者逐页阅读,而最基本的指令就是按空白键(space)就往下一页显示,按 b 键就会往回(back)一页显示,而且还有搜寻字串的功能 。more命令从前向后读取文件,因此在启动时就加载整个文件。
在查阅文件的时候,我们说过可以用cat命令,不过这个是入门级别的,但凡用了几天Linux的,基本不太会再使用cat,而是另外两个指令,more或者less。这次说一下more,more是在文件的内容一个屏幕装不下的时候使用的。而less是more的升级版本,稍后会介绍。
more
: 文件内容一屏幕装不下的时候使用的
看看more
命令的定义吧。
more - file perusal filter for crt viewing
看不懂,什么是CRT,莫慌,关于more
的解释主要针对在上古年代的计算机,你不理解crt
也没有关系,毕竟现在已经是Retina
的年代了。
more的命令格式与cat一样,可以直接跟上文件名,如下:
more [参数] 文件
其中的参数如下所示:
一起看看下面的实例吧,这里以文件/etc/services
为例:
这个文件的开始信息如下:
# /etc/services:
# $Id: services,v 1.55 2013/04/14 ovasik Exp $
#
# Network services, Internet style
# IANA services version: last updated 2013-04-10
#
# Note that it is presently the policy of IANA to assign a single well-known
# port number for both TCP and UDP; hence, most entries here have two entries
# even if the protocol doesn't support UDP operations.
# Updated from RFC 1700, ``Assigned Numbers'' (October 1994). Not all ports
# are included, only the more common ones.
#
# The latest IANA port assignments can be gotten from
# http://www.iana.org/assignments/port-numbers
# The Well Known Ports are those from 0 through 1023.
# The Registered Ports are those from 1024 through 49151
# The Dynamic and/or Private Ports are those from 49152 through 65535
#
# Each line describes one service, and is of the form:
#
# service-name port/protocol [aliases ...] [# comment]
tcpmux 1/tcp # TCP port service multiplexer
tcpmux 1/udp # TCP port service multiplexer
rje 5/tcp # Remote Job Entry
rje 5/udp # Remote Job Entry
echo 7/tcp
接下来的命令从第10行开始显示:
$ more +10 /etc/services
# Updated from RFC 1700, ``Assigned Numbers'' (October 1994). Not all ports
# are included, only the more common ones.
#
# The latest IANA port assignments can be gotten from
# http://www.iana.org/assignments/port-numbers
# The Well Known Ports are those from 0 through 1023.
# The Registered Ports are those from 1024 through 49151
# The Dynamic and/or Private Ports are those from 49152 through 65535
#
# Each line describes one service, and is of the form:
#
# service-name port/protocol [aliases ...] [# comment]
tcpmux 1/tcp # TCP port service multiplexer
tcpmux 1/udp # TCP port service multiplexer
rje 5/tcp # Remote Job Entry
rje 5/udp # Remote Job Entry
echo 7/tcp
echo 7/udp
discard 9/tcp sink null
discard 9/udp sink null
systat 11/tcp users
systat 11/udp users
daytime 13/tcp
daytime 13/udp
qotd 17/tcp quote
qotd 17/udp quote
可以看到前面的10行是没有显示的。
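顺带一提,在脚本等非交互场景下,more +N 这种从第N行开始显示的效果可以用 tail -n +N 模拟,下面用一个临时生成的文件演示:

```shell
# 生成 1~20 共 20 行,从第 10 行开始取,再看前 3 行
tmpdir=$(mktemp -d)
seq 1 20 > "$tmpdir/demo.txt"
tail -n +10 "$tmpdir/demo.txt" | head -n 3   # 输出 10 11 12
rm -rf "$tmpdir"
```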
这里的含义为定义每屏输出的内容为10行:你的屏幕可能足够大,不过每次显示的内容只有指定的n行,此时终端可能还会残留以前的内容:
$ more -10 /etc/services
# /etc/services:
# $Id: services,v 1.55 2013/04/14 ovasik Exp $
#
# Network services, Internet style
# IANA services version: last updated 2013-04-10
#
# Note that it is presently the policy of IANA to assign a single well-known
# port number for both TCP and UDP; hence, most entries here have two entries
# even if the protocol doesn't support UDP operations.
# Updated from RFC 1700, ``Assigned Numbers'' (October 1994). Not all ports
可以关注一下,此时每次显示的只有10行。
这个参数用于在文件中搜索字符串pattern,然后从该字符串所在行的前两行开始显示。比如搜索number,会显示以下内容:
$ more +/number /etc/services
#
# Note that it is presently the policy of IANA to assign a single well-known
# port number for both TCP and UDP; hence, most entries here have two entries
# even if the protocol doesn't support UDP operations.
# Updated from RFC 1700, ``Assigned Numbers'' (October 1994). Not all ports
# are included, only the more common ones.
#
# The latest IANA port assignments can be gotten from
# http://www.iana.org/assignments/port-numbers
# The Well Known Ports are those from 0 through 1023.
# The Registered Ports are those from 1024 through 49151
# The Dynamic and/or Private Ports are those from 49152 through 65535
#
# Each line describes one service, and is of the form:
#
# service-name port/protocol [aliases ...] [# comment]
tcpmux 1/tcp # TCP port service multiplexer
tcpmux 1/udp # TCP port service multiplexer
rje 5/tcp # Remote Job Entry
rje 5/udp # Remote Job Entry
echo 7/tcp
echo 7/udp
discard 9/tcp sink null
discard 9/udp sink null
systat 11/tcp users
systat 11/udp users
daytime 13/tcp
......
可以留意,此时显示的内容,第三行即包含搜索的字符串。
除以上介绍的以外,还有比较容易理解的以下参数:
mv
命令用于移动文件或者重命名文件及文件夹。官方定义为:
mv : move files
mv的语法与cp等其他语法类似,如下:
$ mv [options] source dest
$ mv [options] source ... directory
几个比较常用的选项如下:
$ ls
a.txt
$ mv a.txt b.txt
$ ls
b.txt
直接移动或者叫做重命名,文件夹也类似的操作。
$ ls
a.txt
$ mv -b ../a.txt a.txt
$ ls
a.txt a.txt~
可以看到覆盖前为原来的a.txt生成了一个备份文件a.txt~
$ mv -u a b
此时的操作为:只有当a比b更新或者b不存在的时候,才会执行移动,否则什么都不做。
这个用法多用在横向对比两个文件夹、只同步较新文件的场景。
$ mv -i b.txt a.txt
mv:是否覆盖"a.txt"?
在目标文件已存在的时候,-i选项会进行提示,此时输入y才会覆盖,而输入n则会取消这个操作。
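下面是一个可以直接运行的小示例,演示重命名与 -b 备份的效果(在 mktemp 生成的临时目录中操作,文件名均为演示假设):

```shell
cd "$(mktemp -d)"        # 进入临时目录,避免影响现有文件
echo old > a.txt
mv a.txt b.txt           # 重命名:a.txt 变成 b.txt
echo new > a.txt
mv -b a.txt b.txt        # 目标 b.txt 已存在,覆盖前先备份为 b.txt~
ls                       # b.txt b.txt~
```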
在Linux系统中,网络是至关重要的部分,而netstat
命令是管理和监视网络连接的强大工具之一。
它提供了关于网络接口和路由表的详细信息,有助于了解网络连接状态、统计信息以及网络协议的使用情况。
也更方便我们对网络的管理、故障排除以及安全监控等等。
netstat
命令比较简单,通过简单的参数组合,可以获得各种网络相关的信息。
以下是一些常用的参数及其功能:
-a
:显示所有连接和监听端口。-t
:仅显示TCP连接。-u
:仅显示UDP连接。-n
:以数字形式显示地址和端口号。-p
:显示进程标识符和程序名称。-r
:显示路由表。-s
:显示统计信息。
… note::
桃李春风一杯酒,江湖夜雨十年灯。
黄庭坚《寄黄几复》
Linux nice
命令可以通过修改优先级来执行程序,如果单纯输入nice
,未指定程序,则会打印出目前的排程优先序,默认的数值为0,范围为最高优先级的 -20到 最低优先级的19。
所谓的优先序就是优先执行的概念,优先级越高,获得CPU的时间和顺序也会越提前。
官方定义为:
nice
- run a program with modified scheduling priority
使用方法如下:
$ nice [OPTION] [COMMAND [ARG]...]
参数的话,只有一个,如下:
-n, --adjustment=N
调整执行的优先序(默认为 10)
下面不带 -n 直接执行,将把ls
的优先序加上默认的10:
$ nice ls
下面的就是把ls
命令的优先级加5
$ nice -n 5 ls
下面通过几个操作来看一下nice
的效果
$ vim &
$ nice vi
$ nice vim &
$ nice -n 5 vim &
# 查看进程状态 其中PRI即为优先级情况,可以看到几个进程是不同的。
$ ps -l
F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD
0 S 1000 8 7 0 80 0 - 6406 - tty1 00:00:02 bash
0 T 1000 251 8 0 75 42967291 - 15927 - tty1 00:00:00 vim
0 T 1000 319 8 0 65 42967281 - 15927 - tty1 00:00:00 vi
0 T 1000 374 8 0 65 42967281 - 15927 - tty1 00:00:00 vim
0 T 1000 415 8 2 70 42967286 - 15927 - tty1 00:00:00 vim
0 R 1000 456 8 0 80 0 - 4983 - tty1 00:00:00 ps
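其中 NI 列就是 niceness。继承关系可以用下面的小示例直接验证(nice 不带命令时打印当前值,这是 GNU coreutils 的行为):

```shell
nice                # 打印当前 shell 的优先序,通常为 0
nice -n 5 nice      # 子进程在父进程的基础上加 5
```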
passwd
用于创建或者更新用户密码,是管理员必备的命令之一。
这个命令最终的实现是通过调用Linux-PAM 和Libuser API来实现的。
官方的定义为:
passwd - update user’s authentication tokens
使用的方法为:
$ passwd [-k] [-l] [-u [-f]] [-d] [-e] [-n mindays] [-x maxdays] [-w warndays] [-i inactivedays] [-S] [--stdin] [username]
其中很常用的options为:
-S, --status
:显示密码的状态信息-d, --delete
:删除用户密码,此时该用户将处于无密码状态
不太常用的options为:
--stdin
:可以通过标准输入,亦可以为一个pipe-l, --lock
:锁定账号,不过也不是完全锁定,因为用户可以通过ssh key来继续访问-u, --unlock
:与上面的-l
选项相反,属于解锁用户-w, --warning DAYS
:口令到期前通知用户,只有设置了password lifetime的账户才支持
下面是最常用的用法,用于设置或者修改用户密码:
$ sudo passwd user #设置用户user的密码
Enter new UNIX password: #输入新密码,输入的密码不显示
Retype new UNIX password: #再次输入确认密码
passwd: password updated successfully
# 此时设置成功
$ sudo passwd -d user
passwd: password expiry information changed.
此时用户处于无密码的状态,很类似最近说的,没有密码就是最安全的密码。
$ sudo passwd -S user
[sudo] password for oper:
user PS 2013-02-11 0 99999 7 -1 (Password set, SHA512 crypt.)
说到密码,有两个比较重要的原则:
鉴于用man pgrep
和man pkill
的时候出来的同一个释义,所以要一次说两个命令了。
Linux pgrep
和pkill
命令根据名称和其他属性来查找或发送处理的信号。
官方定义为:
pgrep
,pkill
- look up or signal processes based on name and other attributes
pgrep
将查找当前运行的进程中满足条件的并打印到stdout中。
语法如下所示:
$ pgrep [options] pattern
$ pkill [options] pattern
常用的参数为:
-u
选择仅匹配指定有效用户ID进程-l
列出进程名及进程ID-a
列出进程的详细命令行
默认情况下,仅仅列出匹配关键词的进程ID。
$ pgrep ssh
3073
3833
4475
5786
5955
11301
13654
...
而pkill
则不同,它会直接向匹配到的进程发送终止信号(默认为SIGTERM)。
可以通过-u
来指定用户
$ pgrep ssh -u username
4475
22084
27695
...
只看到ID会让人抓狂,因为不知道对应的具体进程,可以通过-l
来查看进程名
$ pgrep ssh -l
3073 sshd
3833 ssh-agent
4475 ssh-agent
5786 ssh-agent
5955 sshd
...
或许只知道进程名,还不足以了解具体信息,此时-a
选项就派上用场了。
$ pgrep ssh -a
3073 /usr/sbin/sshd -D
3833 /usr/bin/ssh-agent /etc/X11/xinit/Xclients
5955 sshd: /usr/sbin/sshd -D -f /assets/sshd_config -e [listener] 0 of 100-200 startups
...
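把 pgrep 和 pkill 串起来用的一个小示例(后台的 sleep 进程只是演示目标,属于假设的场景):

```shell
sleep 300 &                 # 起一个后台进程作为演示目标
pgrep -a -f 'sleep 300'     # -f 按完整命令行匹配,能看到 PID 和命令
pkill -f 'sleep 300'        # 按同样的模式发送 SIGTERM
```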
在linux系统里面如果想判断网络的好坏,脑海中蹦出的第一个命令就是ping
了。
官方定义为:
ping - send ICMP ECHO_REQUEST to network hosts
ping
命令基本是最常用的网络命令,它可以用来测试与目标主机的连通性。
ping
使用ICMP传输协议,通过发送ICMP ECHO_REQUEST数据包到网络主机,并显示返回的相应情况,根据这些信息就可以判断目标主机是否可以访问,在发送的过程中还会有一个时间戳用来计算网络的状态。
不过有些服务器为了防止通过ping
探测到,可能会在防火墙或者内核参数中禁止ping
命令,这样的话,可能虽然目标主机可以访问,但是无法ping
通,所以并不能说ping
不通的网络就是不能访问的。
需要注意linux下的ping和windows下的ping稍有区别,linux下ping不会自动终止,需要按ctrl+c终止或者用参数-c指定要求完成的回应次数。
ping
的使用说实话挺复杂,挺多的,不过常用的这篇短文基本就够了。
详细如下:
# ALL
$ ping [-aAbBdDfhLnOqrRUvV46] [-c count] [-F flowlabel] [-i interval] [-I interface] [-l preload] [-m mark] [-M pmtudisc_option] [-N node‐info_option] [-w deadline] [-W timeout] [-p pattern] [-Q tos] [-s packetsize] [-S sndbuf] [-t ttl] [-T timestamp option] [hop ...] destination
# 较常用的选项如下:
$ ping [-c count] [-i interval] destination
参数说明:
-c
<完成次数> 设置完成要求回应的次数。
-i interval
指定收发信息的间隔时间。
如果不加任何参数,查看是否ping
通
$ ping www.baidu.com
PING www.a.shifen.com (115.239.210.27) 56(84) bytes of data.
64 bytes from 115.239.210.27: icmp_seq=1 ttl=52 time=6.06 ms
64 bytes from 115.239.210.27: icmp_seq=2 ttl=52 time=5.56 ms
64 bytes from 115.239.210.27: icmp_seq=3 ttl=52 time=5.67 ms
64 bytes from 115.239.210.27: icmp_seq=4 ttl=52 time=5.82 ms
64 bytes from 115.239.210.27: icmp_seq=5 ttl=52 time=5.70 ms
64 bytes from 115.239.210.27: icmp_seq=6 ttl=52 time=5.79 ms
^C # 此处输入了Ctrl+C强制退出
6 packets transmitted, 6 received, 0% packet loss, time 5008ms
rtt min/avg/max/mdev = 5.560/5.767/6.060/0.160 ms
可以看到可以ping
通www.baidu.com,时延还算比较OK,几个毫秒量级。
这里看一下几个字段的含义,其中:
56(84) bytes of data:表示发送的ICMP数据部分为56字节,加上包头后整个IP包为84字节;
time=5.56ms:表示响应的时间,值越小,证明连接越快;
TTL=52:TTL是Time To Live的缩写,是IP协议包头里的一个字段,表示数据包还能经过多少跳路由:每经过一个路由器减1,减到0时该包会被丢弃。由于不同操作系统的初始TTL不同,大体上可以通过这个值来推测目标主机的操作系统类型。
可以通过 参数-c
来发送指定数目的包后停止
$ ping www.baidu.com -c 5
PING www.a.shifen.com (115.239.211.112) 56(84) bytes of data.
64 bytes from 115.239.211.112: icmp_seq=1 ttl=52 time=6.03 ms
64 bytes from 115.239.211.112: icmp_seq=2 ttl=52 time=5.96 ms
64 bytes from 115.239.211.112: icmp_seq=3 ttl=52 time=5.79 ms
64 bytes from 115.239.211.112: icmp_seq=4 ttl=52 time=5.79 ms
64 bytes from 115.239.211.112: icmp_seq=5 ttl=52 time=6.21 ms
5 packets transmitted, 5 received, 0% packet loss, time 4007ms
rtt min/avg/max/mdev = 5.791/5.958/6.215/0.186 ms
此时将在发送5次数据包以后自动停止,在Linux里面,如果不加这个参数,是会一直发送运行的。
可以通过 参数 -i N
指定每个N秒发送一次信息,如下将每隔3秒发送一次ping
信息。
$ ping www.baidu.com -i 3
PING www.a.shifen.com (14.215.177.38) 56(84) bytes of data.
64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=1 ttl=55 time=28.6 ms
64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=2 ttl=55 time=28.6 ms
64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=3 ttl=55 time=28.6 ms
64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=4 ttl=55 time=28.6 ms
64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=5 ttl=55 time=28.6 ms
64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=6 ttl=55 time=28.6 ms
^C
6 packets transmitted, 6 received, 0% packet loss, time 15041ms
rtt min/avg/max/mdev = 28.650/28.670/28.697/0.139 ms
如上,每隔3秒会发送一次,对于需要持续检测或者记录的可以考虑适当加大这个时间间隔。
注意,只有管理员可以设置小于0.2秒的时间间隔。所以这个数值可以是浮点数~
上面的几个例子是可以配合使用的,比如
$ ping www.baidu.com -c 4 -i 5
PING www.a.shifen.com (14.215.177.39) 56(84) bytes of data.
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=1 ttl=55 time=29.4 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=2 ttl=55 time=29.3 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=3 ttl=55 time=29.4 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=4 ttl=55 time=29.4 ms
4 packets transmitted, 4 received, 0% packet loss, time 15045ms
rtt min/avg/max/mdev = 29.396/29.428/29.461/0.110 ms
这个例子为:每隔5秒查询一次,一共查询4次,然后退出。
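在脚本里判断网络质量时,常常需要从 ping 的统计行里提取丢包率。下面用一行固定的样例输出演示提取方法(样例数据是假设的,并未实际发包):

```shell
line='4 packets transmitted, 4 received, 0% packet loss, time 3004ms'
loss=$(printf '%s\n' "$line" | sed -E 's/.* ([0-9]+)% packet loss.*/\1/')
echo "丢包率: ${loss}%"      # 丢包率: 0%
```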
参考 Linux pgrep 命令。
参考 Linux reboot 命令。
ps
命令是“process status”的缩写,类似于 windows 的任务管理器
ps
命令用于显示当前系统的进程状态。
通常搭配kill
指令随时中断、删除不必要的程序。
同时呢,ps
命令是非常强大的进程查看命令,可以确定有哪些进程正在运行和运行的状态、进程是否结束、进程有没有僵死、哪些进程占用了过多的资源等等,总之大部分【Windows】任务管理器的信息都是可以通过执行该命令得到的。
$ ps [参数]
常用参数
其中aux的输出信息如下所示:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
$ ps
PID TTY TIME CMD
44965 pts/0 00:00:00 bash
56519 pts/0 00:00:00 ps
什么参数都不跟的话,基本输出没啥用处。
通常情况下,最常用的为把所有进程显示出来:
$ ps -aux
$ ps -A
把所有进程显示出来,并输出到ps.txt文件:
$ ps -aux > ps.txt
大部分情况下,希望查找有问题的进程或者感兴趣的进程,使用管道如下:
$ ps -aux | grep ssh
root 1303 0.0 0.0 82468 1204 ? Ss Apr17 0:00 /usr/sbin/sshd
root 3260 0.0 0.0 52864 572 ? Ss Apr17 0:00 /usr/bin/ssh-agent /bin/sh -c exec -l /bin/bash -c "env GNOME_SHELL_SESSION_MODE=classic gnome-session --session gnome-classic"
root 24188 0.0 0.0 112652 956 pts/0 S+ 11:39 0:00 grep --color=auto ssh
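注意 grep 自身也会出现在结果里(如上面输出的最后一行)。下面是两个实用的小技巧:[s]sh 的字符组写法可以把 grep 自身排除掉;-o 则适合在脚本里只取需要的字段:

```shell
ps aux | grep '[s]sh' || echo "没有匹配的进程"   # 字符组写法使 grep 自身不被匹配
ps -o comm= -p $$                                # 只输出当前 shell 进程的命令名
```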
Linux pstree
命令是processes tree
的简称,用于将所有进程以树状图的形式显示。
可以说是结合了ps
和tree
两个命令。
官方定义为:
pstree
- display a tree of processes
使用方法为:
$ pstree [-a, --arguments] [-c, --compact-not] [-C, --color attr] [-g, --show-pgids] [-h, --highlight-all, -Hpid, --highlight-pid pid] [-l, --long] [-n, --numeric-sort] [-N, --ns-sort ns] [-p, --show-pids] [-s, --show-parents] [-S, --ns-changes] [-t, --thread-names] [-T, --hide-threads] [-u, --uid-changes] [-Z, --security-context] [-A, --ascii, -G, --vt100, -U, --unicode] [pid, user]
参数比较多,也比较复杂。其中常用的选项为:
-a
显示整个命令的完整路径。-G
使用这个选项(VT100画线字符)时,输出有时会好看一些。
默认不带参数将显示当前的进程树:
$ pstree
systemd─┬─SIMU.EXE───STARTPMON
├─NetworkManager───2*[{NetworkManager}]
├─abrt-dbus───3*[{abrt-dbus}]
├─2*[abrt-watch-log]
├─abrtd
├─accounts-daemon───2*[{accounts-daemon}]
├─agetty
├─10*[at-spi-bus-laun─┬─dbus-daemon]
│ └─3*[{at-spi-bus-laun}]]
├─10*[at-spi2-registr───2*[{at-spi2-registr}]]
├─atd
├─auditd─┬─audispd─┬─sedispatch
│ │ └─{audispd}
│ └─{auditd}
├─avahi-daemon───avahi-daemon
├─boltd───2*[{boltd}]
├─chrome─┬─2*[cat]
│ ├─chrome───chrome─┬─chrome
│ │ └─5*[{chrome}]
│ ├─chrome───8*[{chrome}]
│ ├─chrome-sandbox───chrome─┬─chrome─┬─chrome───4*[{chrome}]
│ │ │ └─2*[chrome───12*[{chrome}]+
│ │ └─chrome-sandbox───nacl_helper
│ └─21*[{chrome}]
├─chronyd
├─colord───2*[{colord}]
├─crashpad_handle───2*[{crashpad_handle}]
├─crond
├─cupsd
├─11*[dbus-daemon]
├─10*[dbus-launch]
├─10*[dconf-service───2*[{dconf-service}]]
├─dnsmasq───dnsmasq
...
使用-a参数可以看到各个进程的详细信息
$ pstree -a
...
|-at-spi-bus-laun
| |-dbus-daemon --config-file=/usr/share/defaults/at-spi2/accessibility.conf --nofork --print-address 3
| `-3*[{at-spi-bus-laun}]
|-at-spi-bus-laun
| |-dbus-daemon --config-file=/usr/share/defaults/at-spi2/accessibility.conf --nofork --print-address 3
| `-3*[{at-spi-bus-laun}]
...
pwd
命令的作用是查看当前目录,没有参数,输入后回车即可显示当前绝对路径。
官方定义为:
pwd - print name of current/working directory
所以pwd
是Print Working Directory第一个字的缩写。
唯二需要了解的参数如下:
-L
, --logical
:打印逻辑路径,与pwd
一致-P
, --physical
:打印物理路径,会把符号链接解析到其真实位置
比如我们进入一个目录,然后把当前路径打印出来,如下:
$ cd /etc/sysconfig/network-scripts/
$ pwd
/etc/sysconfig/network-scripts
可以看到pwd输出了完整的绝对路径
比如如下:
$ pwd
/opt/test
$ ls -l
总用量 1
lrwxrwxrwx 1 root root 14 Jan 15 2012 dir -> source/dir
drwxrwxrwx 1 root root 14 Jan 15 2012 source
可以看到此时的路径在/opt/test/里面有两个目录source和dir,其中dir链接到source里面的dir。
接下来对比一下-L和-P的区别。
$ cd dir
$ pwd
/opt/test/dir
$ pwd -L
/opt/test/dir
$ pwd -P
/opt/test/source/dir
从上面的输出可以发现,-P
参数会显示文件最原始的路径;而-L
则是逻辑上的路径。
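上面的效果可以用一个临时目录自行复现(目录与链接名均为演示假设):

```shell
base=$(mktemp -d)              # 演示用的临时目录
mkdir -p "$base/source/dir"
ln -s source/dir "$base/dir"   # dir 是指向 source/dir 的符号链接
cd "$base/dir"
pwd -L                         # 逻辑路径,以 .../dir 结尾
pwd -P                         # 物理路径,以 .../source/dir 结尾
```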
… note::
世事漫随流水,算来一梦浮生。
李煜《乌夜啼》
Linux halt
, poweroff
, reboot
用来挂起、关机或者重启机器,成功后返回0。
这不是一个命令,这是三个命令,只不过三个命令的参数都是一致的。
官方定义为:
halt, poweroff, reboot
- Halt, power-off or reboot the machine
其实这三个命令都可以通过shutdown
来执行,并且相对而言shutdown
的参数还更多一些。
使用方法如下:
$ halt [OPTIONS...]
$ poweroff [OPTIONS...]
$ reboot [OPTIONS...]
参数如下所示:
--halt
将机器挂起,三个命令均相同-p, --poweroff
将机器关机,三个命令均相同--reboot
将机器重启,三个命令均相同-f, --force
立即执行挂起、关机或重启。对这类force选项,除非万不得已,否则轻易莫用-n, --no-sync
在挂起、关机或重启前不对硬盘进行同步,这个很危险,轻易不要用--no-wall
在挂起、关机或重启前不发送警告信息,对多用户场景不友好
接下来的三个命令效果一致,都是将电脑关机。不过这种用法总感觉怪怪的,还是让它们各司其职比较好:比如关机用poweroff
,重启还是reboot
吧。
$ halt --poweroff
$ poweroff --poweroff
$ reboot --poweroff
rm
命令用于删除文件或者目录。官方定义为:
remove files or directories
$ rm [options] name...
参数:
-i
删除前逐一询问确认,确认时比较好用。-f
即使原档案属性设为唯读,也直接删除,无需逐一确认,是force的意思。-r
将目录及里面的子文件逐一删除。删除文件可以直接使用rm
命令,若删除目录则必须配合选项"-r",例如:
$ rm a.txt
rm:是否删除 一般文件 "a.txt"? y
$ rm test
rm: 无法删除目录"test": 是一个目录
$ rm -r test
rm:是否删除 目录 "test"? y
删除当前目录下的所有文件及目录,命令行为:
$ rm -r *
文件一旦通过rm命令删除,便无法恢复,所以使用该命令时务必格外小心。
因为已经发生过太多欲哭无泪的故事。。。
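练习 rm 时建议在临时目录里操作,避免误删真实数据(目录由 mktemp 生成,属于演示假设):

```shell
d=$(mktemp -d)
touch "$d/a.txt"
rm "$d/a.txt"      # 删除文件
mkdir "$d/test"
rm -r "$d/test"    # 删除目录必须加 -r
rm -r "$d"         # 最后清理掉整个临时目录
```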
rmdir
相对于 mkdir
,rmdir
是用来将一个“空的“目录删掉。
如果一个目录下面没有任何文件或文件夹,你就可以用 rmdir
指令将其除去。
而如果一个目录底下有其他的内容, rmdir
将无法删除这个目录,此时只能改用rm -r来处理。
这个命令比较鸡肋,基本都可以通过rm来搞定。
官方定义为:
remove directory
使用方法很简单,基本如下:
$ rmdir directory
如果说出彩的话,只有-p
选项还可以说道说道。
详细的解释为:
Each directory argument is treated as a pathname of which all components will be removed, if they are empty, starting with the last most component.
如果在删除子目录以后,主目录也是空目录的话,则一并删除之。
用法如下:
$ rmdir -p directory
其他的还是使用rm
吧。
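-p 的效果可以这样验证(在临时目录中演示,目录名为假设):

```shell
cd "$(mktemp -d)"
mkdir -p a/b/c     # 建三层目录,全部为空
rmdir -p a/b/c     # 先删 c,b 随之为空被删,a 同理
```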
rsync
也是远程(或本地)复制和同步文件最常用的命令。
与scp
类似。
官方定义为:
rsync - a fast, versatile, remote (and local) file-copying tool
从定义看,比scp
要强一些。
借助rsync
命令,可以跨目录,跨磁盘和跨网络远程与本地数据进行复制和同步。比如最常用的就是在两台Linux主机之间进行数据的备份。
语法相对而言,比较简单,不过用法其实挺多的。
$ rsync [OPTION...] SRC... [DEST]
常用的参数为:
-v
: 详细模式输出-r
: 递归拷贝数据,但是传输数据时不保留时间戳和权限-a
: 归档模式, 归档模式总是递归拷贝,而且保留符号链接、权限、属主、属组时间戳-z
: 压缩传输-h
: human-readable--progress
: 显示传输过程--exclude=PATTERN
指定排除传输的文件模式--include=PATTERN
指定需要传输的文件模式
默认情况下,传输一个文件不需要任何参数:
$ rsync user@192.168.100.123:~/dest_file dir/
命令执行后,会提示输入远程机器的密码,不过成功后不会显示任何信息,需要自行确认。
所以实际使用时通常会加上-rv参数,这样不仅可以传输文件,也可以传输目录:
$ rsync -rv user@192.168.100.123:~/dest_file dir/
user@192.168.100.123's password:
receiving file list ... done
a
b
c
对于小文件而言,没有问题,但是如果文件比较大,比如有几个GB,那么此时--progress
参数就会比较有帮助:
$ rsync -rv --progress user@192.168.100.123:~/dest_file dir/
user@192.168.100.123's password:
receiving file list ... done
30 files to consider
a 100% 278.25MB/s 0:02:00 (xfer#1, to-check=28/30)
b 100% 289.25MB/s 0:02:00 (xfer#1, to-check=27/30)
c 100% 277.45MB/s 0:02:00 (xfer#1, to-check=26/30)
会实时更新传输的进度。
此时对比scp,可以看到多了一些提示信息,比如会提示:
receiving file list … done
30 files to consider
另外,在实时更新的进度里面也有了一些多出来的信息。
比如做软件开发,不希望传输一些编译过程中产生的.o
文件,此时--exclude
参数就很完美,如下:
$ rsync -rv --progress --exclude "*.o" user@192.168.100.123:~/dest_file dir/
user@192.168.100.123's password:
receiving file list ... done
25 files to consider
a 100% 278.25MB/s 0:02:00 (xfer#1, to-check=28/30)
b 100% 289.25MB/s 0:02:00 (xfer#1, to-check=27/30)
c 100% 277.45MB/s 0:02:00 (xfer#1, to-check=26/30)
此时可以看到,本来要传输的30个文件,排除其中的.o
文件后只剩25个。
linux scp
命令主要用于远程复制传输文件。
官方定义为:
scp — secure copy (remote file copy program)
是安全拷贝的缩写,主要是因为scp
使用了ssh
的安全机制。
scp
应该是接触Linux后第一个用于在两台服务器之间传输数据的不二之选,当然,ftp
除外了。
语法看着挺复杂:
$ scp [-12346BCpqrv] [-c cipher] [-F ssh_config] [-i identity_file] [-l limit] [-o ssh_option] [-P port] [-S program] [[user@]host1:]file1 ... [[user@]host2:]file2
其实简化下来就是:
$ scp [options] file_source file_target
差不多有20个参数,不过常用的有如下几个:
默认情况下,传输一个文件不需要任何参数:
$ scp src_file user@192.168.100.123:~/dest_file
user@192.168.100.123's password:
src_file 100% 44 20.0KB/s 00:00
命令执行后,会提示输入远程机器的密码,后面会显示传输成功的文件。
而传输一个目录不加任何参数的话,会报错如下:
$ scp src_dir user@192.168.100.123:~/dest_dir
user@192.168.100.123's password:
src_dir: not a regular file
提示要传输的不是常规的文件,需要加上参数-r
递归传输如下:
$ scp -r src_dir user@192.168.100.123:~/dest_dir
user@192.168.100.123's password:
a 100% 75 35.1KB/s 00:00
b 100% 48KB 14.7MB/s 00:00
c 100% 581 326.4KB/s 00:00
d 100% 48KB 15.3MB/s 00:00
e 100% 278MB 130.7MB/s 00:02
对于小文件而言,一闪而过,但是如果文件比较大,比如有几个GB,那么scp
默认显示的实时进度信息就会比较有帮助:
$ scp a user@192.168.100.123:~/b
user@192.168.100.123's password:
a 0% 0 0.0KMB/s --:-- ETA
a 30% 110MB 130.7MB/s 00:02
a 100% 278MB 130.7MB/s 00:02
会实时更新传输的进度。
加上-p
参数就会保留文件的修改时间,访问时间和访问权限:
$ scp -p a user@192.168.100.123:~/b
这个对于有些对时间很有控制欲的人很有帮助。
所以最常用的用法是(文件和文件夹均适用):
$ scp -rvp filename/directory user@192.168.100.123:~/
把DOS格式的文件复制到Unix上时,每行末尾会多出一个\r。下面的例子用sed去掉行尾字符,把DOS换行格式转换为Unix格式:
$ sed 's/.$//' filename
逆序输出文件内容:
$ sed -n '1!G;h;$p' thegeekstuff.txt
为文件中所有非空行添加行号:
$ sed '/./=' thegeekstuff.txt | sed 'N; s/\n/ /'
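其中逆序输出那条命令('1!G;h;$p' 借助 sed 的保持空间实现,相当于 tac)可以这样快速验证:

```shell
printf 'a\nb\nc\n' | sed -n '1!G;h;$p'    # 依次输出 c、b、a
```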
… note::
青山依旧在,几度夕阳红。
杨慎《临江仙》
Linux shutdown
命令可以用来挂起、关机或者重启设备,执行成功的话会返回0。
官方定义为:
shutdown
- Halt, power-off or reboot the machine
使用方法如下:
$ shutdown [OPTIONS...] [TIME] [WALL...]
第一个参数可能是一个时间字符串 (通常是 now
),一般而言,后面可以跟上一个提示消息来通知登陆的用户系统马上进行的操作。
时间字符串的一般为hh:mm
,表示经过多长时间进行关机,当然比较常用的为+m
,也就是m
分钟后执行操作。
参数如下所示:
-H, --halt
挂起机器-P, --poweroff
关机(默认选项)-r, --reboot
重启机器-h
等效于--poweroff,除非同时专门指定了--halt选项-k
不进行挂起、关机或者重启,仅仅发送通知信息--no-wall
: 挂起、关机或者重启前,不发送信息-c
取消当前正在进行的关机动作,前提是时间参数不是now
$ shutdown -h now
$ shutdown -h 10
# 增加提示信息
$ shutdown -h 10 "The system will shutdown in 10 minutes, save your work immediately"
$ shutdown -h 17:30
# 增加提示信息
$ shutdown -h 17:30 "The system will shutdown at 17:30, remember to save your work"
$ shutdown -r now
…note::
江山代有才人出,各领风骚数百年。
赵翼《论诗五首·其二》
Linux skill
命令送个讯号给正在执行的程序,预设的讯息为 TERM (中断),较常使用的讯息为 HUP、INT、KILL、STOP、CONT 和 0。
讯息有三种写法:分别为 -9、-SIGKILL、-KILL,可以使用 -l 或 -L 以列出可使用的讯息。
官方含义为:
skill, snice - send a signal or report process status
$ skill [signal] [options] expression
$ snice [new priority] [options] expression
-i
, --interactive
:交互模式,每个动作将要被确认-l
, --list
: 列出所有的信号-L
, --table
: 列出所有的信号名$ skill -l
HUP INT QUIT ILL TRAP ABRT BUS FPE KILL USR1 SEGV USR2 PIPE ALRM TERM STKFLT
CHLD CONT STOP TSTP TTIN TTOU URG XCPU XFSZ VTALRM PROF WINCH POLL PWR SYS
$ skill -L
1 HUP 2 INT 3 QUIT 4 ILL 5 TRAP 6 ABRT 7 BUS
8 FPE 9 KILL 10 USR1 11 SEGV 12 USR2 13 PIPE 14 ALRM
15 TERM 16 STKFLT 17 CHLD 18 CONT 19 STOP 20 TSTP 21 TTIN
22 TTOU 23 URG 24 XCPU 25 XFSZ 26 VTALRM 27 PROF 28 WINCH
29 POLL 30 PWR 31 SYS
$ skill -KILL -t /dev/pts/*
$ skill -STOP -u user1 -u user2 -u user3
参见:kill、killall、nice、pkill、renice、signal。
进程选择选项(表达式可以是 terminal、user、pid、command,下面的选项用于确保表达式被正确解释):
-t, --tty tty
:下一个表达式是终端(tty 或 pty)-u, --user user
:下一个表达式是用户名-p, --pid pid
:下一个表达式是进程 ID-c, --command command
:下一个表达式是命令名--ns pid
:匹配与 pid 属于同一命名空间的进程--nslist ns,...
:列出 --ns 选项要考虑的命名空间,可用的有:ipc、mnt、net、pid、user、uts
信号的具体行为可以参考 signal(7) 手册页。
示例:
$ snice -c seti -c crack +7
将 seti 和 crack 命令降速(优先序 +7)。
… note::
莫听穿林打叶声,何妨吟啸且徐行。
苏轼
Linux sleep
命令可以用来将目前动作延迟一段时间。
sleep
的官方定义为:
sleep - delay for a specified amount of time
或许你觉得计算机太累,让它稍事休息,亦或许过个个把钟头需要喝杯水,此时sleep
就有点小作用了。
其用法如下:
$ sleep [--help] [--version] number[smhd]
除了帮助和版本信息,基本没有参数了。
其中的number是必须的,也就是sleep多久的数字,默认为s
秒。其他的几个含义为:
s
second 秒m
minute分钟h
hour 小时d
day 天
工作太累了,学习太累了,躺着太累了,休息5分钟:
$ sleep 5m
$ sleep 1h
当然,sleep
也是支持时分秒搭配使用的,如下所示:
$ sleep 1h 2m 3s
将会sleep
1个小时2分钟3秒。
当然也可以借助sleep 1做个简单的倒计时:
$ echo "five" && sleep 1 && echo "four" && sleep 1 && echo "three" && sleep 1 && echo "two" && sleep 1 && echo "one" && sleep 1 && echo "Stop"
sleep
在程序里面使用比较频繁,特别是单片机的走马灯等。而Linux的sleep
,也是比较常与bash脚本来配合使用,如下:
#!/bin/bash
echo -e "start to sleep 15 seconds......"
sleep 15
echo -e "continue to run program......"
./program
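配合 date 可以粗略验证 sleep 的时长(秒级精度):

```shell
start=$(date +%s)
sleep 2
end=$(date +%s)
echo "耗时 $((end - start)) 秒"    # 约 2 秒
```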
Linux sort
命令用于将文本内容进行排序。
官方定义为:
sort
- sort lines of text files
$ sort [OPTION]... [FILE]...
$ sort [OPTION]... --files0-from=F
常用的参数为:
-c
检查文件是否已经按照顺序排序。-u
意味着唯一(unique),输出的是去重后的结果。-r
以相反的顺序来排序。-k field1[,field2]
按指定的列进行排序。这里假定测试文件名为testfile:
LiSi 80
ZhangSan 70
WangWu 90
MaLiu 88
在使用 sort
命令以默认的方式对文件的行进行排序,命令如下:
$ sort testfile
LiSi 80
MaLiu 88
WangWu 90
ZhangSan 70
sort
命令默认情况下将第一列以 ASCII 码的次序排列,并将结果输出到标准输出。
对于测试文件而言,或许我们更希望按数字列来排序,此时可以使用-k N
参数,其中N为列数(严格按数值排序时可写成-k 2n)
$ sort testfile -k 2
ZhangSan 70
LiSi 80
MaLiu 88
WangWu 90
在某些情况下,或许只想看看文件是否已经排序,使用-c
参数 :
$ sort -c testfile
sort: testfile:2: disorder
如果没有排序会有输出,而排序的话就没有输出。
如果希望看一下数字从高到低的排序,使用-r
参数:
$ sort testfile -k 2 -r
WangWu 90
MaLiu 88
LiSi 80
ZhangSan 70
将文件按升序排序:
$ sort names.txt
将文件按降序排序:
$ sort -r names.txt
按第3列(数值)对passwd文件排序:
$ sort -t: -k 3n /etc/passwd | more
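上面的几个用法可以串成一个可以直接运行的小例子(测试数据同前文的 testfile,这里直接用 printf 构造):

```shell
printf 'LiSi 80\nZhangSan 70\nWangWu 90\nMaLiu 88\n' |
    sort -k 2 -n      # 按第二列数值升序;-n 在位数不齐时比字典序更可靠
```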
… note::
去年今日此门中,人面桃花相映红。
崔护《题都城南庄》
在 shell 中执行程序时,shell 会提供一组环境变量。source
命令是shell的内建指令,用得最多的场景还是配置参数的读取和设置。
source命令的功能是从指定文件中读取和执行命令,通常用于刚修改过的配置文件,使新的参数能够立即生效,而不必重新登录或重启整台服务器。
较常与export
等结合使用。export
可以新增,修改或删除环境变量,供后续执行的程序使用。不过export
在终端退出后就失效了。
如果需要一直有效,可以考虑写入配置文件。
Linux export 命令用于设置或显示环境变量。比如如下所示:
$ export MYNAME='HELLOWORLD'
$ echo $MYNAME
HELLOWORLD
不过在终端退出后,这个变量定义就不复存在了。
source的用法一般如下所示:
$ source filename
比如最常用的:
$ source ~/.bash_profile
而对于第一个的设置,可以考虑将export MYNAME='HELLOWORLD'
写入文件~/.bash_profile,这样每次登陆或者打开终端的时候都会自动载入了。
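source 读取配置文件的过程可以这样演示(/tmp 下的文件名是演示假设;POSIX 里 source 的等价写法是一个点):

```shell
echo "MYNAME='HELLOWORLD'" > /tmp/myrc.demo   # 构造一个最小的配置文件
. /tmp/myrc.demo                              # 等价于 source /tmp/myrc.demo
echo "$MYNAME"                                # HELLOWORLD
```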
Linux split
命令用于将一个文件切分开,一般用于将大文件切分为多个小文件,方便数据传输、保存和校验等。
默认情况下将按照每1000行切割成一个小文件。
官方定义为:
split
- split a file into pieces
使用方法为:
$ split [OPTION]... [INPUT [PREFIX]]
常用的参数为:
-b, --bytes=SIZE
: 指定每多少字节切成一个小文件
默认情况下,split
会将原来的大文件aa切割成多个以x开头的小文件,可以看到其实为xaa、xab,一直到xaz,然后递增为xba,以此类推。
$ split aa
$ ls
-rw-rw-r-- 1 user user 611037792 Jan 15 22:09 aa
-rw-rw-r-- 1 user user 356533 Jan 15 22:10 xaa
-rw-rw-r-- 1 user user 377414 Jan 15 22:10 xab
-rw-rw-r-- 1 user user 346342 Jan 15 22:10 xac
-rw-rw-r-- 1 user user 358728 Jan 15 22:10 xad
-rw-rw-r-- 1 user user 391466 Jan 15 22:10 xae
-rw-rw-r-- 1 user user 368786 Jan 15 22:10 xaf
-rw-rw-r-- 1 user user 377274 Jan 15 22:10 xag
-rw-rw-r-- 1 user user 393500 Jan 15 22:10 xah
-rw-rw-r-- 1 user user 362512 Jan 15 22:10 xai
-rw-rw-r-- 1 user user 365170 Jan 15 22:10 xaj
-rw-rw-r-- 1 user user 362878 Jan 15 22:10 xak
-rw-rw-r-- 1 user user 387394 Jan 15 22:10 xal
-rw-rw-r-- 1 user user 355614 Jan 15 22:10 xam
-rw-rw-r-- 1 user user 366420 Jan 15 22:10 xan
-rw-rw-r-- 1 user user 368912 Jan 15 22:10 xao
-rw-rw-r-- 1 user user 350226 Jan 15 22:10 xap
-rw-rw-r-- 1 user user 386102 Jan 15 22:10 xaq
-rw-rw-r-- 1 user user 377292 Jan 15 22:10 xar
-rw-rw-r-- 1 user user 376416 Jan 15 22:10 xas
-rw-rw-r-- 1 user user 347584 Jan 15 22:10 xat
-rw-rw-r-- 1 user user 376586 Jan 15 22:10 xau
-rw-rw-r-- 1 user user 352778 Jan 15 22:10 xav
-rw-rw-r-- 1 user user 380608 Jan 15 22:10 xaw
-rw-rw-r-- 1 user user 356634 Jan 15 22:10 xax
-rw-rw-r-- 1 user user 377414 Jan 15 22:10 xay
-rw-rw-r-- 1 user user 346342 Jan 15 22:10 xaz
可以使用-b
参数,切分为准确字节的文件,如下:
$ split aa -b 1024000
$ ll
-rw-rw-r-- 1 user user 611037792 Jan 15 22:09 aa
-rw-rw-r-- 1 user user 1024000 Jan 15 22:15 xaa
-rw-rw-r-- 1 user user 1024000 Jan 15 22:15 xab
-rw-rw-r-- 1 user user 1024000 Jan 15 22:15 xac
-rw-rw-r-- 1 user user 1024000 Jan 15 22:15 xad
-rw-rw-r-- 1 user user 1024000 Jan 15 22:15 xae
-rw-rw-r-- 1 user user 1024000 Jan 15 22:15 xaf
自定义的前缀(PREFIX)直接跟在输入文件的后面即可,如下:
$ split aa DAT
$ ls
aa
DATaa
DATab
DATac
DATad
DATae
DATaf
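split 之后用 cat 就能按文件名顺序还原,下面是一个完整的往返验证(在临时目录中;-l 按行数切分,是 split 的另一个常用选项):

```shell
cd "$(mktemp -d)"
seq 1 2500 > big.txt
split -l 1000 big.txt part_    # 每 1000 行一个文件:part_aa part_ab part_ac
cat part_* > merged.txt        # 文件名本身就是字典序,直接 cat 即可还原
cmp big.txt merged.txt         # 无输出表示内容完全一致
```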
ssh
命令是openssh套件中的客户端连接工具,使用加密协议实现安全的远程登录,从而实现对服务器的远程管理。
官方定义为:
ssh — OpenSSH remote login client
使用方法为:
$ ssh [-46AaCfGgKkMNnqsTtVvXxYy] [-B bind_interface] [-b bind_address] [-c cipher_spec] [-D [bind_address:]port]
[-E log_file] [-e escape_char] [-F configfile] [-I pkcs11] [-i identity_file] [-J destination] [-L address]
[-l login_name] [-m mac_spec] [-O ctl_cmd] [-o option] [-p port] [-Q query_option] [-R address]
[-S ctl_path] [-W host:port] [-w local_tun[:remote_tun]] destination [command]
看着很复杂,确实也很复杂。
不过常用的参数倒是不多,基本为:
-l login_name
指定连接远程服务器的登录用户名-p port
指定远程服务器上的端口默认情况下,ssh
直接跟上IP就可以,不过此时的登陆账户为本机的账户名,可以通过whoami
得到,所以能登陆的前提是localname与服务器的username是一致的。
$ ssh 192.168.1.123
localname@192.168.1.123's password:
此时输入密码即可登陆。
大部分情况下,除非自己是管理员,可能远程登录名与本机名均不一致,此时需要指定登录名,参数-l即可搞定
$ ssh 192.168.1.123 -l username
username@192.168.1.123's password:
此时输入密码即可登陆。
我最初使用的当然就是这种方式了,username@IP地址。
$ ssh username@192.168.1.123
username@192.168.1.123's password:
Last login: Thu Jan 24 19:14:48 2013 from 192.168.111
有些时候可能登陆到服务器仅仅希望执行一些命令,比如看看服务器的时间是正确,服务器的负载如何,服务器的用户谁正在使用,此时可以在最后直接跟上命令,如下,单纯地看看服务器的时间:
$ ssh username@192.168.1.123 date
username@192.168.1.123's password:
Thu Jan 24 21:14:48 2013
还有一些服务器登陆是开放的并不是默认的22端口,有可能是12345端口,此时就需要指定该端口进行登陆,如下:
$ ssh username@192.168.1.123 -p 12345
username@192.168.1.123's password:
登录远程主机:
$ ssh -l jsmith remotehost.example.com
调试ssh客户端的连接过程:
$ ssh -v -l jsmith remotehost.example.com
显示ssh客户端版本:
$ ssh -V
OpenSSH_3.9p1, OpenSSL 0.9.7a Feb 19 2003
Linux stat
命令用于显示 inode 内容。
话说这个inode是个什么东西呢?对于存储在硬盘上的文件,特别是Linux的概念就是,一切皆文件。其最小的存储单元为512字节即一个扇区sector;在读取文件的时候,为了提高效率,是按照4KB的块block来读取的,所以这样看来每次读取了8个sector。而对于每个文件为了索引,其元数据的各种信息就是stat
获取的,用于描述创建者、文件的各种日期、大小等等信息,这份元数据的id就可以认为是inode了,以上。
官方的定义为:
stat - display file or file system status
用法为:
$ stat [options] filename/directory
其中的参数为:
-L
, --dereference
: 跟随符号链接,显示其指向的原始文件的状态
-f
, --file-system
:显示文件系统状态
--printf=FORMAT
: 与C语言的类似,不过看着转义符更多一些
-t
, --terse
:超级简洁的模式
最简单的其实也是最有用的,直接跟上文件或者目录,如下:
$ stat text.txt
File: ‘text.txt’
Size: 51 Blocks: 8 IO Block: 4096 regular file
Device: fd00h/64768d Inode: 1610934260 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 1000/ user) Gid: ( 1000/ user)
Context: unconfined_u:object_r:user_tmp_t:s0
Access: 2012-09-11 21:24:49.660510438 +0800
Modify: 2012-09-09 17:31:54.518005296 +0800
Change: 2012-09-09 17:33:09.670327180 +0800
Birth: -
$ stat dir
File: ‘dir’
Size: 51 Blocks: 0 IO Block: 4096 directory
Device: fd00h/64768d Inode: 1610934255 Links: 2
Access: (0775/drwxrwxr-x) Uid: ( 1000/ user) Gid: ( 1000/ user)
Context: unconfined_u:object_r:user_tmp_t:s0
Access: 2012-09-13 16:44:56.802331727 +0800
Modify: 2012-09-13 16:44:55.624342864 +0800
Change: 2012-09-13 16:44:55.624342864 +0800
Birth: -
各个段的解释为:
参数-f
将显示文件系统信息,可以看到Type:xfs这个信息。
$ stat -f text.txt
File: "text.txt"
ID: fd0000000000 Namelen: 255 Type: xfs
Block size: 4096 Fundamental block size: 4096
Blocks: Total: 244020823 Free: 182831648 Available: 182831648
Inodes: Total: 488280064 Free: 487587798
--printf=FORMAT
选项可以跟的FORMAT有很多,较常用为:
格式化字符串 | 含义 |
---|---|
%A | 易读的访问状态 |
%B | 每个块的大小(单位为字节) |
%d | 十进制的设备号 |
%F | 文件类型 |
%G | 所有者的组名 |
%i | inode数字 |
%m | 挂载点 |
%n | 文件名 |
%s | 总大小(单位:字节) |
%U | 所有者的用户名 |
%w | 易读的文件生成时间(大写的为Epoch) |
%x | 易读的文件访问时间(大写的为Epoch) |
%y | 易读的文件修改时间(大写的为Epoch) |
%z | 易读的文件上一次修改状态时间(大写的为Epoch) |
下面的这个命令可以实现类似ls -l
的用法,可以扩展更多,也可以自定义使用,比如alias
等等。
$ stat --printf="%A. %U %G %s %x %n \n" text.txt
-rw-rw-r--. user user 51 2012-09-11 21:24:49.660510438 +0800 text.txt
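--printf 的输出很适合在脚本里直接消费,比如只取文件大小(示例文件由 mktemp 生成,初始大小为 0):

```shell
f=$(mktemp)
stat --printf='%n 大小为 %s 字节\n' "$f"
```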
su - user
能切换到一个用户中去执行一个指令或脚本
该命令格式如下所示:
$ su [options...] [-] [user [args...]]
其中一些比较重要的选项如下所示:
-f
, --fast
:快速启动,不读取启动文件,这个取决于具体的shell。-l
, --login
:这个参数会让你有焕然一新的感觉,基本类似于重新登录。如果不指定用户,默认切换到root环境。-g
,--group
:指定主要组,这个只能由root用户指定。-m
, -p
,--preserve-environment
:保留环境变量,除非同时指定了-l。-s SHELL
,--shell=SHELL
:切换使用的SHELL。
执行如下命令,会切换到user用户并执行ls命令:
$ su - user -c ls
不同的人,可能对不同的SHELL情有独钟,A喜欢bash,B可能喜欢csh,这个就可以通过-s来切换,如下可以切换到csh
$ su - user -s /bin/csh
关于SHELL,根据安装的环境不同,基本有如下几个:
su [user]
切换到其他用户,但是不切换环境变量,su - [user]
则是完整的切换到新的用户环境。
如:
$ pwd
/root
$ su oper
$ pwd
/root
$ su - oper
Password:
$ pwd
/home/oper
所以大家在切换用户时,尽量用su - [user],否则可能会出现环境变量不对的问题。
Linux sudo
命令以系统管理者的身份执行指令,也就是说,经由 sudo
所执行的指令就好像是 root 亲自执行。
如果希望可以执行这个命令,需要管理员在文件 /etc/sudoers 中增加权限即可。
官方的定义为:
execute a command as another user
$ sudo [ option ] command
参数说明:
-l
或--list
:显示出自己(执行 sudo 的使用者)的权限-k
将会强迫使用者在下一次执行 sudo
时问密码(不论有没有超过 N 分钟)-b
将要执行的指令放在后台执行-p prompt
可以更改问密码的提示语,其中 %u 会代换为使用者的帐号名称, %h 会显示主机名称-u username/uid
不加此参数,代表要以 root 的身份执行指令,而加了此参数,可以以 username 的身份执行指令(uid 为该 username 的使用者号码)-s
执行环境变数中的 SHELL 所指定的 shell ,或是 /etc/passwd 里所指定的 shell-H
将环境变数中的 HOME (家目录)指定为要变更身份的使用者家目录(如不加 -u
参数就是系统管理者 root )command
要以系统管理者身份(或以 -u
更改为其他人)执行的指令如果没有sudo
权限,在执行命令的时候还有下面👇的输出:
$ sudo ls
[sudo] password for username:
username is not in the sudoers file. This incident will be reported.
这个的应用场景为:其他用户正在登录,而你具有sudo权限,此时可以通过指定用户名来以该用户的身份操作。
$ sudo -u username ls -l
如果不清楚自己可以执行哪些命令,可以通过参数-l来查看,主要取决于sudoers里面的配置。
$ sudo -l
Password:
Matching Defaults entries for user on localhost:
!visiblepw, always_set_home, match_group_by_gid, env_reset, env_keep="COLORS DISPLAY
HOSTNAME HISTSIZE KDEDIR LS_COLORS", env_keep+="MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS
LC_CTYPE", env_keep+="LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES",
env_keep+="LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE", env_keep+="LC_TIME LC_ALL
LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY", secure_path=/sbin\:/bin\:/usr/sbin\:/usr/bin
User user may run the following commands on localhost:
(ALL) ALL
以root权限统计/home下各用户目录的磁盘占用并按大小排序:
$ sudo sh -c "cd /home ; du -s * | sort -rn "
15013524344 user1
1170974156 user2
139238772 user3
1382673532 user4
41071068 user5
3523056 user6
top
命令比较像Windows里面的任务管理器,提供一个动态实时的系统状态检测,可以检测实时显示内存、CPU、进程的运行状态,主要在分析系统负载的时候比较常用。
官方定义为:
top - display Linux processes
状态默认实时刷新,刷新间隔默认为3秒。
使用的方法如下(选项超级多,其实不复杂):
$ top -bcHiOSs -d secs -n max -u|U user -p pid -o fld -w [cols]
参数说明:
-d
: 改变显示的更新速度,或是在交互式( interactive command)按 s
或d
-c
: 切换显示模式,共有两种模式,一是只显示执行程序的名称,另一种是显示完整的路径与名称;这个在定位执行命令的时候较常用-n
: 更新的次数,完成后将会退出-b
: 批模式操作,主要用来将 top
的结果输出到其他程序或者文件;-i
: 不显示任何闲置不使用CPU的进程-s
: 安全模式,取消交谈式指令-pN1 -pN2 ... or -pN1,N2,N3 ...
:指定PID模式,仅仅监控N1,N2等信息-u/U user
:仅仅关注user的进程情况
在输入top
命令以后,如果希望退出,输入q或者直接Ctrl+C即可。
还有一个情况,可以输入h进行帮助查询,用于进一步的交互操作。
通常情况下,最常用的就是输入top
命令,不加任何参数,这种情况下最希望看到的就是最占用系统资源的进程。
如下所示:
$ top
top - 22:23:20 up 461 days, 7:52, 18 users, load average: 1.82, 1.57, 1.45
Tasks: 773 total, 1 running, 768 sleeping, 0 stopped, 4 zombie
%Cpu(s): 10.1 us, 6.5 sy, 0.0 ni, 83.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 32664832 total, 668020 free, 15683576 used, 16313236 buff/cache
KiB Swap: 16449532 total, 13409776 free, 3039756 used. 15787188 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
7965 dbus 20 0 76092 8456 1704 S 7.5 0.0 40307:04 dbus-daemon
23460 root 20 0 397640 5560 3248 S 4.2 0.0 4738:26 accounts-daemon
4321 user 20 0 821828 104812 4584 S 3.2 0.3 7380:28 gsd-color
此时可以看到系统的基本信息,可以看到分为三个部分:
$ top -c
7965 dbus 20 0 76092 8456 1704 S 7.5 0.0 40307:04 /usr/bin/dbus-daemon
此时省去其他信息,可以看到dbus-daemon增加了路径信息为**/usr/bin/dbus-daemon**
这个命令用于定量显示,比如刷新10次后退出,如下:
$ top -n 10
如果觉得太长或者太短,可以通过-d
来设置,或者在交互模式下输入d
或者s
来设置。
$ top -d 0.8 # 设置为0.8秒
$ top -d 6 # 设置为6秒
如果仅仅对某个进程感兴趣,如下指定PID即可。
$ top -p 1234 # 对进程1234感兴趣
作为管理员or朋友,或许对某个用户感兴趣,比如user,此时可以仅仅显示该用户的进程信息
$ top -u user
… note::
未老莫还乡,还乡须断肠。
宋 韦庄《菩萨蛮 人人尽说江南好》
tac
命令将文件反向输出,刚好与前面的cat
输出相反,cat
命令可用于输出文件的内容到标准输出。
这个命令其实就是cat
的反向输出,😁。
tac
的官方定义为:
tac
- concatenate and print files in reverse
其用法一般为:
$ tac [OPTION]... [FILE]...
tac
命令的可选参数[OPTION]
如下所示:
-b
, --before
:在行前而不是行尾添加分割标志-r
, --regex
:将分割标志作为正则表达式来解析-s
, --separator=STRING
:使用STRING
作为分割标志
同样使用前面的hello.c文件,内容为:
#include <stdio.h>
int main(int argc, char * argv[])
{
printf("Hello World\n");
return 0;
}
接下来的实例全部围绕这个文件展开。
与cat比对输出如下所示:
$ cat hello.c
#include <stdio.h>
int main(int argc, char * argv[])
{
printf("Hello World\n");
return 0;
}
$ tac hello.c
}
return 0;
printf("Hello World\n");
{
int main(int argc, char * argv[])
#include <stdio.h>
其他几个参数用的到时不多,不过搭配起来还是有一些帮助的,比如做一个反序输出,搭配使用-s
和-r
参数,如下:
$ echo 'Hello World.' | tac -r -s "."
.dlroW olleH
这个方法就用到了管道、正则表达式。
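最基础的按行逆序可以这样快速验证:

```shell
printf '1\n2\n3\n' | tac    # 依次输出 3、2、1
```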
tail
命令用来查看文件尾部的n行,如果没有指定的n,默认显示10行。
命令格式:
$ tail [option] [filename]
参数option比较常用的如下所示:
-f
循环读取-c <数目>
显示的字节数-n <行数>
显示文件的尾部 n 行内容
假定文件text.txt有20行,内容为1-20,默认情况下的使用如下:
$ tail text.txt
11
12
13
14
15
16
17
18
19
20
可以通过-n参数来只显示N行,而不是默认的10行,比如15行,如下:
$ tail -n 15 text.txt
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
此时如果希望从第N行显示,而不是显示N行,可以通过下面的参数,比如从第15行显示
$ tail -n +15 text.txt
15
16
17
18
19
20
如果希望显示文件的最后几个字符,比如6个,如下:
$ tail -c 6 text.txt
19
20
# 查看文件的后60KB
$ tail -c 60k filename
# 查看文件的后60MB
$ tail -c 60m filename
参数 -f
常常用于查阅正在改变的日志文件。如下面👇所示:
$ tail -f filename
如果filename的内容在增加,那么显示在屏幕上的内容就会一直更新。
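上面几种用法可以串起来验证一下(测试文件用 seq 构造,/tmp 下的路径为演示假设):

```shell
seq 1 20 > /tmp/tail.demo
tail -n 3 /tmp/tail.demo      # 最后 3 行:18 19 20
tail -n +18 /tmp/tail.demo    # 从第 18 行到末尾:18 19 20
```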
… note::
十年磨一剑,霜刃未曾试。
贾岛《剑客 / 述剑》
Linux的tar命令可以用来压缩或者解压缩文件。
官方定义为:
tar
- an archiving utility
$ tar optionA [optionsB] filename
使用该命令时,optionA选项是必须要有的,它告诉tar
要做什么事情,optionsB选项是辅助使用的,可以选用。
其中optionsA主要为:
-c
创建新的档案文件。如果用户想备份一个目录或是一些文件,就要选择这个选项。相当于打包。-x
从档案文件中释放文件。相当于拆包。-t
列出档案文件的内容,查看已经备份了哪些文件。不过需要注意的是,这三个参数仅仅能存在一个。
辅助选项常用的为:
-z:是否同时具有 gzip 的属性,有的话压缩文件格式为:filename.tar.gz
-j:是否同时具有 bzip2 的属性,有的话压缩文件格式为:filename.tar.bz2
-v:压缩的过程中显示文件,这个基本都需要带上
-p:使用原文件的原来属性(属性不会依据使用者而变)
--exclude FILE:在压缩的过程中,不要将 FILE 打包!
接下来的命令为把a,b,c,d压缩到文件test.tar.gz中。
$ tar czvf test.tar.gz a b c d
a
b
c
d
接下来的命令将列出压缩文件的内容,但是不解压,所以可以先确定,再解压不迟
$ tar tzvf test.tar.gz a b c d
-rw-rw-r-- oper/oper 12 2010-05-24 22:51 a
-rw-rw-r-- oper/oper 18 2010-05-24 22:51 b
-rw-rw-r-- oper/oper 15 2010-05-24 22:51 c
-rw-rw-r-- oper/oper 28 2010-05-24 22:51 d
接下来就可以解压操作了。
$ tar zxvf test.tar.gz
a
b
c
d
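上面提到的 --exclude 也值得一个例子(demo 目录与其中的文件名均为演示假设),打包时排除所有 .tmp 文件:

```shell
mkdir -p demo
touch demo/a demo/b demo/skip.tmp
tar czf demo.tar.gz --exclude='*.tmp' demo  # 打包时排除 *.tmp
tar tzf demo.tar.gz                         # 列出包内容,看不到 skip.tmp
```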
Linux的tee
命令可以在将输出显示到终端的同时写入文件。
这个命令对于既想实时看到输出、又想保存到文件稍后查看的场景十分有用。
官方定义为:
tee
- read from standard input and write to standard output and files
具体的使用方法为:
$ tee [OPTION]... [FILE]...
参数:
-a, --append:追加到现有文件的后面,而非覆盖它。
-i, --ignore-interrupts:忽略中断信号。
比如最简单的想查看一下当前有哪些文件并保存到一个日志,如下:
$ ls
a.txt b.txt c.txt d.txt e.txt
$ ls | tee list.log
a.txt b.txt c.txt d.txt e.txt
$ cat list.log
a.txt b.txt c.txt d.txt e.txt
可以看到tee
在把结果显示在终端的同时,还输出到了文件 list.log 中。
tee
当然也是可以同时输出到多个文件的,比如:
$ ls
a.txt b.txt c.txt d.txt e.txt
$ ls | tee list.log listB.log
a.txt b.txt c.txt d.txt e.txt
$ cat list.log
a.txt b.txt c.txt d.txt e.txt
$ cat listB.log
a.txt b.txt c.txt d.txt e.txt
与自己对话如何呢,或者叫做复读机?
tee
命令直接跟文件的话,会等待输入,并同步进行输出到终端和文件的操作。
$ tee test.log
hello
hello
world
world
$ cat test.log
hello
world
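前面参数里提到的 -a(追加)可以这样验证(tee_demo.log 为演示假设的文件名):

```shell
echo "first"  | tee tee_demo.log     # 默认覆盖写入
echo "second" | tee -a tee_demo.log  # -a:追加而非覆盖
cat tee_demo.log
# first
# second
```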
… note::
林花谢了春红,太匆匆。无奈朝来寒雨晚来风。
李煜《相见欢》
Linux time
命令的用途,在于测量指定命令消耗的时间。
最常用的在于大概评估一个程序的运行时间。
这个命令很容易和date命令混淆,注意区分。
官方定义为:
time - time a simple command or give resource usage
可以给出包括系统态耗时在内的粗略时间统计。
$ time [options] command [arguments...]
参数:基本可以认为没有参数,直接在要测量的命令前加上time,就会显示该程序或命令执行的消耗时间。
$ time ls /var
account crash games lib log ......
real 0m0.014s
user 0m0.003s
sys 0m0.010s
$ time ps -aux
root 295490 0.0 0.0 0 0 ? S Feb20 0:10 [ldlm_cb00_019
root 297717 0.0 0.0 0 0 ? S< Jan29 0:04 [kworker/58:1H
root 304801 0.0 0.0 0 0 ? S Mar19 0:00 [kworker/1:1]
root 311110 0.0 0.0 0 0 ? S Mar20 0:00 [kworker/66:0]
root 313146 0.0 0.0 0 0 ? S Mar20 0:01 [kworker/73:2]
root 313461 0.0 0.0 0 0 ? S< Jan29 0:00 [kworker/44:2H
root 313914 0.0 0.0 0 0 ? S Feb21 0:10 [kworker/9:2]
root 314118 0.0 0.0 0 0 ? S Feb21 3:34 [kworker/18:1]
root 315801 0.0 0.0 0 0 ? S Mar20 0:00 [kworker/79:2]
real 0m0.180s
user 0m0.019s
sys 0m0.114s
唯一需要留意的是上面三个时间的含义:real为实际流逝的墙钟时间,user为进程在用户态消耗的CPU时间,sys为进程在内核态消耗的CPU时间。
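用 sleep 可以直观体会三者的区别:sleep 几乎不消耗CPU,所以 real 接近 1 秒,而 user 和 sys 都接近 0:

```shell
# sleep 只是等待,不占用CPU
time sleep 1
# real 约为 1 秒,user/sys 接近 0m0.000s
```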
top
命令比较像Windows里面的任务管理器,提供一个动态实时的系统状态检测,可以检测实时显示内存、CPU、进程的运行状态,主要在分析系统负载的时候比较常用。
官方定义为:
top - display Linux processes
状态默认实时显示,刷新间隔为3秒。
使用的方法如下(选项超级多,其实不复杂):
$ top -bcHiOSs -d secs -n max -u|U user -p pid -o fld -w [cols]
参数说明:
-d:改变显示的更新速度,也可以在交互模式(interactive command)中按 s 或 d 来修改
-c:切换显示模式,共有两种模式,一是只显示执行程序的名称,另一种是显示完整的路径与名称;这个在定位执行命令的时候较常用
-n:更新的次数,完成后将会退出
-b:批模式操作,主要用来将 top 的结果输出到其他程序或者文件
-i:不显示任何闲置不使用CPU的进程
-s:安全模式,取消交互式指令
-p N1 -p N2 ... 或 -p N1,N2,N3 ...:指定PID模式,仅仅监控N1,N2等进程
-u/-U user:仅仅关注user的进程情况
在输入top
命令以后,如果希望退出,输入q或者直接Ctrl+c即可。
还有一个情况,可以输入h进行帮助查询,用于进一步的交互操作。
通常情况下,最常用的就是输入top
命令,不加任何参数,这种情况下最希望看到的就是最占用系统资源的进程。
如下所示:
$ top
top - 22:23:20 up 461 days, 7:52, 18 users, load average: 1.82, 1.57, 1.45
Tasks: 773 total, 1 running, 768 sleeping, 0 stopped, 4 zombie
%Cpu(s): 10.1 us, 6.5 sy, 0.0 ni, 83.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 32664832 total, 668020 free, 15683576 used, 16313236 buff/cache
KiB Swap: 16449532 total, 13409776 free, 3039756 used. 15787188 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
7965 dbus 20 0 76092 8456 1704 S 7.5 0.0 40307:04 dbus-daemon
23460 root 20 0 397640 5560 3248 S 4.2 0.0 4738:26 accounts-daemon
4321 user 20 0 821828 104812 4584 S 3.2 0.3 7380:28 gsd-color
此时可以看到系统的基本信息:上方是系统汇总区(运行时间、任务数、CPU、内存、交换分区),下方是进程列表。
$ top -c
7965 dbus 20 0 76092 8456 1704 S 7.5 0.0 40307:04 /usr/bin/dbus-daemon
此时省去其他信息,可以看到dbus-daemon增加了路径信息 /usr/bin/dbus-daemon。
参数-n用于定量刷新,比如刷新10次后退出,如下:
$ top -n 10
如果觉得刷新间隔太长或者太短,可以通过-d
来设置,或者在交互模式下输入d
或者s
来设置。
$ top -d 0.8 # 设置为0.8秒
$ top -d 6 # 设置为6秒
如果仅仅对某个进程感兴趣,如下指定PID即可。
$ top -p 1234 # 对进程1234感兴趣
作为管理员,或许对某个用户感兴趣,比如user,此时可以仅仅显示该用户的进程信息:
$ top -u user
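结合前面提到的 -b 和 -n,可以在脚本里抓取一次快照而不进入交互界面:

```shell
# 批模式刷新一次就退出,适合重定向到文件或交给其他程序处理
top -b -n 1 | head -15
```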
Linux的touch
命令一般用来修改文件时间戳(可更改文件或目录的日期时间,包括存取时间和更改时间),或者新建一个不存在的文件。
$ touch [选项]... 文件...
其中选项如下所示:
-a:只更改存取时间。
-c 或 --no-create:不建立任何文档。
-d:使用指定的日期时间,而非现在的时间。
-m:只更改变动时间。
-r:把指定文件或目录的日期时间,统统设成和参考文件或目录的日期时间相同。
-t:使用指定的日期时间,而非现在的时间。
$ ls
$ touch a.txt b.txt
$ ls
a.txt b.txt
# 将文件b.txt的时间戳与a.txt保持一致
$ touch -r a.txt b.txt
# 设定filename的时间戳为2012年05月06日13时14分15秒
$ touch -t 201205061314.15 filename
$ ls -l
-rw-rw-r--. 1 user user 0 May 6 2012 filename
其中-t time
使用指定的时间值 time
作为指定文件相应时间戳的新值。此处 time
的形式为:[[CC]YY]MMDDhhmm[.SS]
其中秒及年可以省略。
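参数 -d 则接受可读性更好的日期写法,效果与 -t 类似(demo_file 为演示假设的文件名):

```shell
touch -d '2012-05-06 13:14:15' demo_file  # 用 -d 指定可读的日期时间
ls -l demo_file                           # 时间戳显示为 May 6 2012
```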
Linux tr
命令用于转换或删除字符。
tr
命令可以从标准输入读取数据,经过字符串转译后,将结果输出到标准输出。
官方定义为:
tr
- translate or delete characters
使用方法为:
$ tr [OPTION]... SET1 [SET2]
其中常用的三个选项为:
-d, --delete
:删除指令字符[:lower:]
:所有小写字母[:upper:]
:所有大写字母[:blank:]
:所有空格默认无参数的显示
$ echo "Hello World, Welcome to Linux!" | tr a-z A-Z
HELLO WORLD, WELCOME TO LINUX!
# 还有一种方法
$ echo "Hello World, Welcome to Linux!" | tr '[:lower:]' '[:upper:]'
HELLO WORLD, WELCOME TO LINUX!
反过来转换为小写:
$ echo "Hello World, Welcome to Linux!" | tr A-Z a-z
hello world, welcome to linux!
# 还有一种方法
$ echo "Hello World, Welcome to Linux!" | tr '[:upper:]' '[:lower:]'
hello world, welcome to linux!
很多变量或者函数起名字都会移除元音字符,可以考虑使用-d
参数,如下:
$ echo "Hello World, Welcome to Linux!" | tr -d 'a,o,e,i'
Hll Wrld Wlcm t Lnux!
同理,使用-d
,结合[:blank:]
可以快速删除所有空格。
$ echo "Hello World, Welcome to Linux!" | tr -d '[:blank:]'
HelloWorld,WelcometoLinux!
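除了上面列出的选项,tr 还有一个 -s(--squeeze-repeats)选项,可以把连续重复的字符压缩成一个,清理多余空格时很方便:

```shell
# -s 把连续的空格压缩成一个
echo "Hello    World,   Welcome" | tr -s ' '
# Hello World, Welcome
```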
tracepath
用于显示报文到达某一个地址的路由信息,能够发现其中的MTU信息。
在探测过程中,会使用UDP端口或随机端口。所以可以看到后面的?符号。与traceroute
类似。
这对于长距离的数据传输分析有很明显的帮助作用。
官方的定义为:
tracepath, tracepath6 - traces path to a network host discovering MTU along this path
使用方法为:
$ tracepath [-n] [-b] [-l pktlen] [-m max_hops] [-p port] destination
其中选项如下所示:
-n:只显示IP地址信息(默认是显示域名的,这个选项将不显示域名了)
-b:同时显示主机名和IP地址(默认没有域名的只显示IP地址,这个选项即使没有主机名也会把IP地址作为主机名)
-l:设置初始化的数据包长度,默认tracepath为65535,而tracepath6为128000
-m:设置最大的hops(或最大的TTL)为max_hops(默认为30)
-p:设置初始使用的目标端口
root@mops:~ $ tracepath6 3ffe:2400:0:109::2
1?: [LOCALHOST] pmtu 1500
1: dust.inr.ac.ru 0.411ms
2: dust.inr.ac.ru asymm 1 0.390ms pmtu 1480
2: 3ffe:2400:0:109::2 463.514ms reached
Resume: pmtu 1480 hops 2 back 2
以其中一行为例:

| TTL | 探测信息 |
|---|---|
| 1?: | [LOCALHOST] pmtu 1500 |
| 1: | dust.inr.ac.ru 0.411ms |

第一列显示探测的TTL,以冒号结尾;有些情况下信息不足以确认,就出现了表示猜测的 ?。第二列显示网络探测信息:如果未发送到网络,则为路由器地址或者localhost地址;这里还会显示MTU、延迟等信息。
最后一行会总结整个链路的状态信息,显示了检测到的路径MTU、到达目的地的hops以及从目的地返回的hops数。
可以与ping
配合使用,可以先用ping
获取到具体的IP地址,然后使用tracepath
进行进一步的分析。
$ ping www.bing.com
PING china.bing123.com (202.89.233.101) 56(84) bytes of data.
64 bytes from 202.89.233.101 (202.89.233.101): icmp_seq=1 ttl=116 time=28.1 ms
64 bytes from 202.89.233.101 (202.89.233.101): icmp_seq=2 ttl=116 time=27.9 ms
^C
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 27.964/28.072/28.181/0.199 ms
$ tracepath 202.89.233.101
1?: [LOCALHOST] pmtu 1500
1: no reply
2: 202.127.24.1 2.859ms
...
… note::
少小离家老大回,乡音无改鬓毛衰。
贺知章《回乡偶书二首·其一》
Linux traceroute
命令用于打印显示数据包到网络主机的路径。
traceroute
会跟踪从IP网络发送到指定主机的路由包,并利用IP协议的生存时间(TTL)字段,试图在通往主机路径上的每个网关得到一个ICMP TIME_EXCEEDED响应,由此可得具体的路由信息。
官方的定义为:
traceroute - print the route packets trace to network host
使用方法还挺复杂的,不过常用的不多:
$ traceroute [-46dFITUnreAV] [-f first_ttl] [-g gate,...]
[-i device] [-m max_ttl] [-p port] [-s src_addr]
[-q nqueries] [-N squeries] [-t tos]
[-l flow_label] [-w waittimes] [-z sendwait] [-UL] [-D]
[-P proto] [--sport=port] [-M method] [-O mod_options]
[--mtu] [--back]
host [packet_len]
显示到达目的地的数据包路由
$ traceroute www.bing.com
traceroute to www.bing.com (202.89.233.101), 30 hops max, 60 byte packets
1 * * *
2 10.12.24.1 (202.127.24.1) 3.487 ms 3.490 ms 4.484 ms
3 * * *
4 192.168.1.53 (192.168.1.53) 4.437 ms 4.435 ms 4.426 ms
5 * * 211.102.30.10 (211.102.30.10) 4.358 ms
6 202.97.63.141 (202.97.63.141) 4.344 ms 202.97.53.117 (202.97.53.117) 3.892 ms 202.97.37.61 (202.97.37.61) 5.872 ms
7 202.97.87.121 (202.97.87.121) 3.902 ms 202.97.87.153 (202.97.87.153) 3.878 ms *
8 202.97.97.233 (202.97.97.233) 35.858 ms 40.803 ms 40.796 ms
9 * 36.110.248.146 (36.110.248.146) 26.951 ms *
10 * 220.181.81.82 (220.181.81.82) 26.931 ms 180.149.128.201 (180.149.128.201) 26.941 ms
11 220.181.17.86 (220.181.17.86) 33.956 ms 220.181.81.10 (220.181.81.10) 26.943 ms *
traceroute6 is equivalent to traceroute -6
tcptraceroute is equivalent to traceroute -T
lft , the Layer Four Traceroute, performs a TCP traceroute, like traceroute -T , but attempts to provide compatibility
with the original such implementation, also called “lft”.
The only required parameter is the name or IP address of the destination host. The optional packet_length is the
total size of the probing packet (default 60 bytes for IPv4 and 80 for IPv6). The specified size can be ignored in
some situations or increased up to a minimal value.
This program attempts to trace the route an IP packet would follow to some internet host by launching probe packets
with a small ttl (time to live) then listening for an ICMP “time exceeded” reply from a gateway. We start our probes
with a ttl of one and increase by one until we get an ICMP “port unreachable” (or TCP reset), which means we got to
the “host”, or hit a max (which defaults to 30 hops). Three probes (by default) are sent at each ttl setting and a
line is printed showing the ttl, address of the gateway and round trip time of each probe. The address can be followed
by additional information when requested. If the probe answers come from different gateways, the address of each re‐
sponding system will be printed. If there is no response within a certain timeout, an “*” (asterisk) is printed for
that probe.
After the trip time, some additional annotation can be printed: !H, !N, or !P (host, network or protocol unreachable),
!S (source route failed), !F (fragmentation needed), !X (communication administratively prohibited), !V (host prece‐
dence violation), !C (precedence cutoff in effect), or !<num> (ICMP unreachable code <num>). If almost all the probes
result in some kind of unreachable, traceroute will give up and exit.
We don’t want the destination host to process the UDP probe packets, so the destination port is set to an unlikely
value (you can change it with the -p flag). There is no such a problem for ICMP or TCP tracerouting (for TCP we use
half-open technique, which prevents our probes to be seen by applications on the destination host).
In the modern network environment the traditional traceroute methods can not be always applicable, because of wide‐
spread use of firewalls. Such firewalls filter the “unlikely” UDP ports, or even ICMP echoes. To solve this, some
additional tracerouting methods are implemented (including tcp), see LIST OF AVAILABLE METHODS below. Such methods try
to use particular protocol and source/destination port, in order to bypass firewalls (to be seen by firewalls just as
a start of allowed type of a network session).
OPTIONS
-4, -6 Explicitly force IPv4 or IPv6 tracerouting. By default, the program will try to resolve the name given, and
choose the appropriate protocol automatically. If resolving a host name returns both IPv4 and IPv6 addresses,
traceroute will use IPv4.
-I, --icmp
Use ICMP ECHO for probes
-T, --tcp
Use TCP SYN for probes
-d, --debug
Enable socket level debugging (when the Linux kernel supports it)
-F, --dont-fragment
Do not fragment probe packets. (For IPv4 it also sets DF bit, which tells intermediate routers not to fragment
remotely as well).
Varying the size of the probing packet by the packet_len command line parameter, you can manually obtain infor‐
mation about the MTU of individual network hops. The --mtu option (see below) tries to do this automatically.
Note, that non-fragmented features (like -F or --mtu) work properly since the Linux kernel 2.6.22 only. Before
that version, IPv6 was always fragmented, IPv4 could use the once the discovered final mtu only (from the route
cache), which can be less than the actual mtu of a device.
-f first_ttl, --first=first_ttl
Specifies with what TTL to start. Defaults to 1.
-g gateway, --gateway=gateway
Tells traceroute to add an IP source routing option to the outgoing packet that tells the network to route the
packet through the specified gateway (most routers have disabled source routing for security reasons). In gen‐
eral, several gateway’s is allowed (comma separated). For IPv6, the form of num,addr,addr… is allowed, where
num is a route header type (default is type 2). Note the type 0 route header is now deprecated (rfc5095).
-i interface, --interface=interface
Specifies the interface through which traceroute should send packets. By default, the interface is selected ac‐
cording to the routing table.
-m max_ttl, --max-hops=max_ttl
Specifies the maximum number of hops (max time-to-live value) traceroute will probe. The default is 30.
-N squeries, --sim-queries=squeries
Specifies the number of probe packets sent out simultaneously. Sending several probes concurrently can speed
up traceroute considerably. The default value is 16.
Note that some routers and hosts can use ICMP rate throttling. In such a situation specifying too large number
can lead to loss of some responses.
-n Do not try to map IP addresses to host names when displaying them.
-p port, --port=port
For UDP tracing, specifies the destination port base traceroute will use (the destination port number will be
incremented by each probe).
For ICMP tracing, specifies the initial ICMP sequence value (incremented by each probe too).
For TCP and others specifies just the (constant) destination port to connect. When using the tcptraceroute
wrapper, -p specifies the source port.
-t tos, --tos=tos
For IPv4, set the Type of Service (TOS) and Precedence value. Useful values are 16 (low delay) and 8 (high
throughput). Note that in order to use some TOS precedence values, you have to be super user.
For IPv6, set the Traffic Control value.
-l flow_label, --flowlabel=flow_label
Use specified flow_label for IPv6 packets.
-w max[,here,near], --wait=max[,here,near]
Determines how long to wait for a response to a probe.
There are three (in general) float values separated by a comma (or a slash). Max specifies the maximum time
(in seconds, default 5.0) to wait, in any case.
Traditional traceroute implementation always waited whole max seconds for any probe. But if we already have
some replies from the same hop, or even from some next hop, we can use the round trip time of such a reply as a
hint to determine the actual reasonable amount of time to wait.
The optional here (default 3.0) specifies a factor to multiply the round trip time of an already received re‐
sponse from the same hop. The resulting value is used as a timeout for the probe, instead of (but no more than)
max. The optional near (default 10.0) specifies a similar factor for a response from some next hop. (The time
of the first found result is used in both cases).
First, we look for the same hop (of the probe which will be printed first from now). If nothing found, then
look for some next hop. If nothing found, use max. If here and/or near have zero values, the corresponding
computation is skipped.
Here and near are always set to zero if only max is specified (for compatibility with previous versions).
-q nqueries, --queries=nqueries
Sets the number of probe packets per hop. The default is 3.
-r Bypass the normal routing tables and send directly to a host on an attached network. If the host is not on a
directly-attached network, an error is returned. This option can be used to ping a local host through an in‐
terface that has no route through it.
-s source_addr, --source=source_addr
Chooses an alternative source address. Note that you must select the address of one of the interfaces. By de‐
fault, the address of the outgoing interface is used.
-z sendwait, --sendwait=sendwait
Minimal time interval between probes (default 0). If the value is more than 10, then it specifies a number in
milliseconds, else it is a number of seconds (float point values allowed too). Useful when some routers use
rate-limit for ICMP messages.
-e, --extensions
Show ICMP extensions (rfc4884). The general form is CLASS/TYPE: followed by a hexadecimal dump. The MPLS
(rfc4950) is shown parsed, in a form: MPLS:L=label,E=exp_use,S=stack_bottom,T=TTL (more objects separated by /
).
-A, --as-path-lookups
Perform AS path lookups in routing registries and print results directly after the corresponding addresses.
There are additional options intended for advanced usage (such as alternate trace methods etc.):
–sport=port
Chooses the source port to use. Implies -N 1 -w 5 . Normally source ports (if applicable) are chosen by the
system.
–fwmark=mark
Set the firewall mark for outgoing packets (since the Linux kernel 2.6.25).
-M method, --module=name
Use specified method for traceroute operations. Default traditional udp method has name default, icmp (-I) and
tcp (-T) have names icmp and tcp respectively.
Method-specific options can be passed by -O . Most methods have their simple shortcuts, (-I means -M icmp,
etc).
-O option, --options=options
Specifies some method-specific option. Several options are separated by comma (or use several -O on cmdline).
Each method may have its own specific options, or many not have them at all. To print information about avail‐
able options, use -O help.
-U, --udp
Use UDP to particular destination port for tracerouting (instead of increasing the port per each probe). De‐
fault port is 53 (dns).
-UL Use UDPLITE for tracerouting (default port is 53).
-D, --dccp
Use DCCP Requests for probes.
-P protocol, --protocol=protocol
Use raw packet of specified protocol for tracerouting. Default protocol is 253 (rfc3692).
–mtu Discover MTU along the path being traced. Implies -F -N 1. New mtu is printed once in a form of F=NUM at the
first probe of a hop which requires such mtu to be reached. (Actually, the correspond “frag needed” icmp mes‐
sage normally is sent by the previous hop).
Note, that some routers might cache once the seen information on a fragmentation. Thus you can receive the fi‐
nal mtu from a closer hop. Try to specify an unusual tos by -t , this can help for one attempt (then it can be
cached there as well).
See -F option for more info.
–back Print the number of backward hops when it seems different with the forward direction. This number is guessed in
assumption that remote hops send reply packets with initial ttl set to either 64, or 128 or 255 (which seems a
common practice). It is printed as a negate value in a form of ‘-NUM’ .
LIST OF AVAILABLE METHODS
In general, a particular traceroute method may have to be chosen by -M name, but most of the methods have their simple
cmdline switches (you can see them after the method name, if present).
default
The traditional, ancient method of tracerouting. Used by default.
Probe packets are udp datagrams with so-called “unlikely” destination ports. The “unlikely” port of the first probe
is 33434, then for each next probe it is incremented by one. Since the ports are expected to be unused, the destina‐
tion host normally returns “icmp unreach port” as a final response. (Nobody knows what happens when some application
listens for such ports, though).
This method is allowed for unprivileged users.
icmp -I
Most usual method for now, which uses icmp echo packets for probes.
If you can ping(8) the destination host, icmp tracerouting is applicable as well.
This method may be allowed for unprivileged users since the kernel 3.0 (IPv4, for IPv6 since 3.11), which supports new
dgram icmp (or “ping”) sockets. To allow such sockets, sysadmin should provide net/ipv4/ping_group_range sysctl range
to match any group of the user.
Options:
raw Use only raw sockets (the traditional way).
This way is tried first by default (for compatibility reasons), then new dgram icmp sockets as fallback.
dgram Use only dgram icmp sockets.
tcp -T
Well-known modern method, intended to bypass firewalls.
Uses the constant destination port (default is 80, http).
If some filters are present in the network path, then most probably any “unlikely” udp ports (as for default method)
or even icmp echoes (as for icmp) are filtered, and whole tracerouting will just stop at such a firewall. To bypass a
network filter, we have to use only allowed protocol/port combinations. If we trace for some, say, mailserver, then
more likely -T -p 25 can reach it, even when -I can not.
This method uses well-known “half-open technique”, which prevents applications on the destination host from seeing our
probes at all. Normally, a tcp syn is sent. For non-listened ports we receive tcp reset, and all is done. For active
listening ports we receive tcp syn+ack, but answer by tcp reset (instead of expected tcp ack), this way the remote tcp
session is dropped even without the application ever taking notice.
There is a couple of options for tcp method:
syn,ack,fin,rst,psh,urg,ece,cwr
Sets specified tcp flags for probe packet, in any combination.
flags=num
Sets the flags field in the tcp header exactly to num.
ecn Send syn packet with tcp flags ECE and CWR (for Explicit Congestion Notification, rfc3168).
sack,timestamps,window_scaling
Use the corresponding tcp header option in the outgoing probe packet.
sysctl Use current sysctl (/proc/sys/net/*) setting for the tcp header options above and ecn. Always set by default,
if nothing else specified.
mss=num
Use value of num for maxseg tcp header option (when syn).
info Print tcp flags of final tcp replies when the target host is reached. Allows to determine whether an applica‐
tion listens the port and other useful things.
Default options is syn,sysctl.
tcpconn
An initial implementation of tcp method, simple using connect(2) call, which does full tcp session opening. Not recom‐
mended for normal use, because a destination application is always affected (and can be confused).
udp -U
Use udp datagram with constant destination port (default 53, dns).
Intended to bypass firewall as well.
Note, that unlike in tcp method, the correspond application on the destination host always receive our probes (with
random data), and most can easily be confused by them. Most cases it will not respond to our packets though, so we
will never see the final hop in the trace. (Fortunately, it seems that at least dns servers replies with something an‐
gry).
This method is allowed for unprivileged users.
udplite -UL
Use udplite datagram for probes (with constant destination port, default 53).
This method is allowed for unprivileged users.
Options:
coverage=num
Set udplite send coverage to num.
dccp -D
Use DCCP Request packets for probes (rfc4340).
This method uses the same “half-open technique” as used for TCP. The default destination port is 33434.
Options:
service=num
Set DCCP service code to num (default is 1885957735).
raw -P proto
Send raw packet of protocol proto.
No protocol-specific headers are used, just IP header only.
Implies -N 1 -w 5 .
Options:
protocol=proto
Use IP protocol proto (default 253).
NOTES
To speed up work, normally several probes are sent simultaneously. On the other hand, it creates a “storm of pack‐
ages”, especially in the reply direction. Routers can throttle the rate of icmp responses, and some of replies can be
lost. To avoid this, decrease the number of simultaneous probes, or even set it to 1 (like in initial traceroute im‐
plementation), i.e. -N 1
The final (target) host can drop some of the simultaneous probes, and might even answer only the latest ones. It can
lead to extra “looks like expired” hops near the final hop. We use a smart algorithm to auto-detect such a situation,
but if it cannot help in your case, just use -N 1 too.
For even greater stability you can slow down the program’s work by -z option, for example use -z 0.5 for half-second
pause between probes.
To avoid an extra waiting, we use adaptive algorithm for timeouts (see -w option for more info). It can lead to prema‐
ture expiry (especially when response times differ at times) and printing “*” instead of a time. In such a case,
switch this algorithm off, by specifying -w with the desired timeout only (for example, -w 5).
If some hops report nothing for every method, the last chance to obtain something is to use ping -R command (IPv4, and
for nearest 8 hops only).
Linux tree
命令以树状图列出目录的内容。
执行tree
指令,它会列出指定目录下的所有文件,包括子目录里的文件。
官方定义为:
tree
- list contents of directories in a tree-like format.
使用方法为:
$ tree [-acdfghilnpqrstuvxACDFQNSUX] [-L level [-R]] [-H baseHREF] [-T title] [-o filename] [--nolinks] [-P pattern] [-I pat‐
tern] [--inodes] [--device] [--noreport] [--dirsfirst] [--version] [--help] [--filelimit #] [--si] [--prune] [--du]
[--timefmt format] [--matchdirs] [--fromfile] [--] [directory ...]
参数比较多,也比较复杂。其中常用的选项为:
-d:显示目录名称而非内容。
-D:列出文件或目录的更改时间。
默认显示当前目录的信息,比如tree和tree .的含义一样。命令有如下输出结果:
$ tree
.
├── a
├── aa
│ ├── aab
│ ├── aac
│ ├── aad
│ └── aae
├── b
├── bb
│ └── bbb
├── c
├── d
├── e
└── f
2 directories, 11 files
$ tree -d
.
├── aa
└── bb
$ tree -D
.
├── [Apr 7 22:34] a
├── [Apr 7 22:37] aa
│ ├── [Apr 7 22:35] aab
│ ├── [Apr 7 22:35] aac
│ ├── [Apr 7 22:35] aad
│ └── [Apr 7 22:35] aae
├── [Apr 7 22:34] b
├── [Apr 7 22:39] bb
│ └── [Apr 7 22:39] bbb
├── [Apr 7 22:34] c
├── [Apr 7 22:34] d
├── [Apr 7 22:33] e
└── [Apr 7 22:33] f
2 directories, 11 files
默认情况下tree可能没有安装,可以通过 apt install tree 或 yum install tree 来安装。
Linux tty
命令用于显示终端机连接标准输入设备的文件名称。
在Linux操作系统中,所有外围设备都有其名称与代号,这些名称代号以特殊文件的类型存放于/dev目录下。
比如ttyN就是今天说的终端设备,而sdaN等就是硬盘设备。
你可以执行tty
(teletypewriter)指令查询目前使用的终端机的文件名称。
官方定义为:
tty - print the file name of the terminal connected to standard input
使用方法比较简单:
$ tty [-s][--help][--version]
对于-s选项就是--silent/--quiet,即屏蔽掉输出,仅仅返回一个退出状态。
默认情况下显示当前终端
$ tty
/dev/pts/4
在Linux里面输入who
可以看到目前登陆的用户,而输出信息包括用户名,tty终端,及登陆的时间信息等等。
$ who
user1 pts/4 2017-04-21 19:58 (xxx.xxx.xxx.xxx)
user1 pts/5 2017-04-07 13:41 (:99)
user1 pts/0 2017-04-08 16:31 (:99)
user1 pts/1 2017-04-08 17:12 (:99)
user1 :0 2017-04-15 15:05 (:0)
user1 pts/2 2017-04-15 15:38 (:0)
user2 pts/3 2017-04-16 08:53 (:3)
user2 pts/6 2017-04-16 11:01 (:3)
user2 pts/7 2017-04-16 16:49 (:3)
user3 pts/8 2017-04-21 20:05 (xxx.xxx.xxx.xxx)
user3 pts/9 2017-04-21 20:07 (xxx.xxx.xxx.xxx)
而如前面所说,对于write
命令其中有一个参数就是指定ttyN的信息。
… note::
离离原上草,一岁一枯荣。
白居易《草 / 赋得古原草送别》
对于高并发或者频繁读写文件的应用程序而言,有时可能需要修改系统能够打开的最多文件句柄数,否则就可能会出现too many open files的错误。
而句柄数分为系统总限制和单进程限制。可以使用ulimit -n
来查看系统对单个进程的限制及可以打开的文件数目。
或者执行ulimit -a
来查看所有的详细信息。
对于临时的修改而言,可以在终端中输入下面的命令,将该值调整为65535。
$ ulimit -HSn 65535
上面的命令将open files修改为65535,不过退出当前shell后即失效。
H和S分别表示硬限制和软限制
如果希望永久修改,需要修改配置文件 /etc/security/limits.conf
,修改后重新登录即可生效。
* soft nofile 65535
* hard nofile 65535
其中的*表示所有的用户,soft和hard分别表示软硬限制,nofile表示能够打开的最大文件数,第四列为具体的值。其中具体的值有一个上限,在文件/proc/sys/fs/nr_open
中,默认为1048576,完全够用了。
上面讨论的均为单个进程的限制,属于进程级别的,系统级别的限制在文件/proc/sys/fs/file-max
中。
修改这个文件也是临时生效的,重启失效,如果希望永久生效,需要修改下面文件:
/etc/sysctl.conf
可以添加下面这行
fs.file-max = 6815744
然后运行sysctl -p
或者重启生效。可以通过lsof -p PID
来查看单个进程打开的文件句柄
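把上面几个查看动作串起来,就是一个快速体检(软限制、硬限制在不同机器上数值会不同):

```shell
ulimit -n            # 当前 shell 的软限制(open files)
ulimit -Hn           # 对应的硬限制
ulimit -a | head -5  # 全部限制信息的前几行
```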
Linux uname
命令用于打印系统信息。
uname
可显示电脑、操作系统、发行版本等等信息。
官方的定义为:
uname - print system information
使用的方法为:
$ uname [OPTION]...
常用的一些选项为:
-a, --all:打印全部的信息
-s, --kernel-name:打印内核名
-n, --nodename:打印网络节点hostname,即主机名
-r, --kernel-release:打印内核发行版
-v, --kernel-version:打印内核版本
-m, --machine:打印机器的硬件名字
-p, --processor:打印处理器类型或者unknown
-i, --hardware-platform:打印硬件平台或者unknown
-o, --operating-system:打印操作系统
显示系统信息,这个基本足矣:
$ uname -a
Linux localdomain 3.10.0-1160.36.2.el7.x86_64 #1 SMP Wed Jul 21 11:57:15 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
显示计算机类型:
$ uname -m
x86_64
显示计算机名:
$ uname -n
localdomain
显示操作系统发行编号:
$ uname -r
3.10.0-1160.36.2.el7.x86_64
显示操作系统名称:
$ uname -s
Linux
显示系统版本与时间:
$ uname -v
#1 SMP Wed Jul 21 11:57:15 UTC 2020
… note::
安得有车马,尚无渔与樵。
宋·王安石《游章义寺》
Linux uniq
命令用于检查及删除文本文件中重复出现的行列,一般与 sort
命令结合使用。
官方定义为:
uniq
- report or omit repeated lines
uniq 可检查文本文件中重复出现的行列。
语法比较简单,直接用就可以。
$ uniq [OPTION]... [INPUT [OUTPUT]]
常用的参数为:
-c 或 --count:在每行旁边显示该行重复出现的次数。
-d 或 --repeated:仅显示重复出现的行。
-u 或 --unique:仅显示只出现一次的行。
假定有1个文件为testfile,内容如下:
testfile
Hello 1
Hello 2
Hello 2
Hello 3
Hello 3
Hello 3
Hello 4
Hello 4
Hello 4
Hello 4
使用uniq命令可以删除重复的行,不管有多少重复的行,仅仅显示一行(注意:uniq只能识别相邻的重复行,所以通常先用sort排序)。
$ uniq testfile
Hello 1
Hello 2
Hello 3
Hello 4
如果希望统计每一行出现的频次,可以使用-c
参数,其中第一列输出为出现的次数:
$ uniq -c testfile
1 Hello 1
2 Hello 2
3 Hello 3
4 Hello 4
在某些情况下,或许只想看到有重复的列,使用-d
参数 :
$ uniq -d testfile
Hello 2
Hello 3
Hello 4
而某些情况下,或许只想看到不重复的列,使用-u
参数:
$ uniq -u testfile
Hello 1
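上文说 uniq 一般与 sort 结合使用,原因是 uniq 只能识别相邻的重复行。下面是一个统计词频的常见管道(输入数据为演示构造):

```shell
# 先排序让重复行相邻,再计数,最后按次数降序
printf 'b\na\nb\nc\na\nb\n' | sort | uniq -c | sort -rn
# 按出现次数降序输出:b 出现 3 次,a 出现 2 次,c 出现 1 次
```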
… _linux-beginner-unzip:
Linux unzip
命令用于解压缩zip文件。
官方的定义为:
unzip - list, test and extract compressed files in a ZIP archive
$ unzip file.zip
unzip
只需在命令后跟上要解压的文件名,如 file.zip
,将该压缩文件解压缩到当前目录。
如果需要指定解压缩的目标目录,可以使用 -d
参数:
$ unzip archive.zip -d /path/where/to/extract
这样就会把压缩文件解压到指定的目录中。
如果压缩的文件巨大,而不想解压其中的某些文件,可以用下面的命令:
$ unzip file.zip -x data
这个命令的意思为,解压file.zip,但是不把里面的data解压。
useradd
用于创建或者更新用户账号信息,是管理员必备的命令之一。
官方的定义为:
useradd - create a new user or update default new user information
使用的方法为:
$ useradd [options] LOGIN
$ useradd -D
$ useradd -D [options]
在不使用 -D 选项的时候,useradd 命令将结合系统默认值和命令行指定的参数创建一个新的用户账户;依赖于命令行选项,useradd命令会更新系统文件,或者创建用户的home目录并拷贝初始文件。而 -D 选项用于查看或修改这些默认值,除非相当专业,慎用。
默认情况下,useradd会创建一个同名的group。
常用的一些选项为:
-c, --comment COMMENT:备注,通常会保存在passwd的备注栏中,一般为用户的全名。
-d, --home-dir HOME_DIR:指定用户登陆时候的HOME目录
-e, --expiredate EXPIRE_DATE:用户账户被禁用的日期,格式为:YYYY-MM-DD。如果不指定,将使用/etc/default/useradd中的值,或者默认为空即不过期
-s, --shell SHELL:指定登陆后使用的shell,对于不同于默认设定的shell比较有用
$ sudo useradd username
$ id username
uid=1001(username) gid=1001(username) groups=1001(username)
正常情况下,创建用户username,会自动在/home目录下创建同名的主目录,通过id
命令可以看到有同名的group也创建了。
$ sudo useradd username -c "USER NAME"
通过这个参数可以设置用户的备注名或者昵称,可以在/etc/passwd中看到。这对于用户管理而言很方便,GUI登陆时也会显示备注名。
默认情况下创建的目录位于/home,但是如果希望更改,比如改到/home1,那么此时使用-d参数即可,如下:
$ sudo useradd -d /home1/ username
有些用户可能对csh情有独钟,那么此时可以使用-s来更改,如下:
$ sudo useradd -s /usr/bin/csh username
目前默认均为bash。
这个选项通常对于临时账户很有效,比如来了一个实习生,实习一个月就离开,假定今天是2013-03-07,那么一个月以后失效的命令为:
$ sudo useradd username -e 2013-04-07
那么一个月以后,该账户将被禁用登陆。
userdel
用于删除用户账号信息,是管理员必备的命令之一。
userdel
将删除用户帐号与相关的文件。若不加参数,则仅仅删除用户帐号,账号的目录可能还会存在。
官方的定义为:
userdel - delete a user account and related files
使用的方法为:
$ userdel [options] LOGIN
其中LOGIN为将删除的用户名,需要确保其存在,不然会报错。
其中很常用的options为:
-r, --remove:删除用户登陆的目录以及目录中所有的文件,还有用户的邮件信息,在其他文件系统的文件可能需要手动删除。
-f, --force:这个选项强制删除用户账号,即便该用户仍在登陆。同时还会删除用户的home目录和mail信息。总之很彪悍的一个参数,可能会引起其他问题,慎用慎用,不用不用。
删除用户账号username:
$ sudo userdel username
$ sudo userdel -r username
-r
参数将把用户的账号以及默认位于/home/username/的所有文件进行删除,谨慎操作,无法找回,除非确认该账号确实不再使用,并且文件确实不再具备价值。
userdel命令是有返回值的,可以通过返回值确认命令的执行情况,0表示执行成功,非0表示出现了问题。
警告:如果一个用户还有程序在运行,userdel
是不允许删除该账户的。此时可以先kill掉该程序,或者使用-f来强制删除。通常情况下,不要这么做。
Linux usermod命令用于修改用户账号的各种设置,在多群组权限的情况下,十分常用。
官方定义为:
usermod - modify a user account
用法为:
$ usermod [options] LOGIN
常用的几个参数为:
-a
追加用户组,通常与-G
一起使用
-c COMMENT
修改用户帐号的备注文字
-e YYYY-MM-DD
修改帐号的有效期限。
-g newgroup
修改用户所属的群组。
-G groups
修改用户所属的附加群组。
正常情况下在创建用户的时候,不太会指定全名,此时可以使用-c
来补全备注。
$ usermod -c "Full Name" user
上面的命令将用户user的备注更改为Full Name。
可以通过-e
参数来指定账号的有效期,特别是在知道用户用过一段时间后就不再使用的情况下,这种场景十分有效。
$ usermod -e 2015-12-12 user
上面的命令将用户user的有效期定义到2015年12月12日。
参数-g
将把用户的默认属组更新。
$ usermod -g newgroup user
上面的命令为把user默认组更改为newgroup。正常情况下,用户将在创建的时候默认创建一个同名的群组。
这个指令用的是最多的,也就是把用户同时追加到其他组,如下所示:
$ usermod -a -G group1,group2,group3 user
含义为把用户user同时追加到用户组group1、group2和group3。
… note::
去年花里逢君别,今日花开已一年。
韦应物《寄李儋元锡》
w
可以认为是加强版的who
,果然越简洁越强大,就好比less
比more
功能更多一样。
w
不仅可以显示谁在登录,还可以显示他们正在做什么。
官方定义为:
w - Show who is logged on and what they are doing.
用法为:
$ w [options] user [...]
常用的两个选项为:
-h
不显示各栏位的标题信息列。
-s
简洁格式列表,不显示用户登入时间,JCPU或者PCPU的时间
显示当前用户的登录信息及执行的命令
$ w
16:29:03 up 26 days, 2:49, 6 users, load average: 1.00, 0.97, 0.96
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
user pts/4 :1 07Sep21 20days 9:59 1:53m bash
user pts/0 :2 08Sep21 6days 0.70s 1:53m zsh
user pts/1 :3 08Sep21 20days 1:13m 1:53m bash
user :0 :0 15Sep21 6days 27days 21.36s zsh
user pts/2 :0 15Sep21 14days 0.25s 0.25s zsh
user pts/3 :3 16Sep21 24:45m 0.22s 0.22s bash
$ w -h
user pts/4 :1 07Sep21 20days 9:59 1:53m bash
user pts/0 :2 08Sep21 6days 0.70s 1:53m zsh
user pts/1 :3 08Sep21 20days 1:13m 1:53m bash
user :0 :0 15Sep21 6days 27days 21.36s zsh
user pts/2 :0 15Sep21 14days 0.25s 0.25s zsh
user pts/3 :3 16Sep21 24:45m 0.22s 0.22s bash
$ w -s
16:29:26 up 26 days, 2:49, 6 users, load average: 1.50, 0.67, 0.36
USER TTY FROM IDLE WHAT
user pts/4 :1 20days bash
user pts/0 :2 6days zsh
user pts/1 :3 20days bash
user :0 :0 6days zsh
user pts/2 :0 14days zsh
user pts/3 :3 24:45m bash
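Because -h strips the header, w's output becomes easy to post-process. For instance, a sketch that lists each logged-in user only once, however many terminals they hold open (it may print nothing on a machine with no interactive sessions):

```shell
# The first column of each session row is the user name;
# sort -u collapses duplicates across multiple terminals.
w -h | awk '{print $1}' | sort -u
```
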
The Linux
wall
command sends a message to every logged-in user whose
mesg
setting is yes (run
mesg
; if it reports "is yes", you will receive broadcasts). When typing the message at the terminal, end it with EOF (usually Ctrl+D). Any user may run this command.
The official definition is:
wall – send a message to everybody's terminal.
so wall is presumably short for "write to all users' terminals".
Usage:
$ wall [-n] [ message ]
The
-n
option changes the header shown above the broadcast message; the example below makes it clear.
A typical scenario: before system maintenance or an upgrade, wall can notify every user currently online.
For example:
$ wall
Dear all,
We want to make you aware that this weekend at 12PM CST
there will be scheduled downtime for approximately 6 hours.
During this time we will add more capacity and apply software updates
to our infrastructure and services.
Please save all your work and log out for safety.
See you next week.
Regards,
Admin
Ctrl+D # end the message and send it
Every logged-in terminal then receives the message:
Broadcast message from user@localhost (pts/4) (Mon Apr 18 22:02:22 2011):
Dear all,
We want to make you aware that this weekend at 12PM CST
there will be scheduled downtime for approximately 6 hours.
During this time we will add more capacity and apply software updates
to our infrastructure and services.
Please save all your work and log out for safety.
See you next week.
Regards,
Admin
Note that the command broadcasts at most 20 lines of text; anything beyond that is not sent.
With the
-n
option the effect looks like this:
$ wall -n 'hello'
# what other terminal users receive
Remote broadcast message (Mon Apr 18 22:05:22 2011):
hello
As you can see, the header has become "Remote broadcast message", dropping the name of the user who sent it.
The wc command reports the number of lines, words and bytes in a file.
The official definition is:
wc - print newline, word, and byte counts for each file
$ wc [-clw][--help][--version][file...]
Options:
-c
or --bytes
print only the byte count (-m or --chars prints the character count instead)
-l
or --lines
print only the line count
-w
or --words
print only the word count
-L
or --max-line-length
print the length of the longest line
By default, wc prints the line, word and byte counts of the given file. The command is:
$ wc file1
First look at the contents of file1:
$ cat file1
Hello World!
$ wc file1 # statistics for file1
1 2 13 file1 # 1 line, 2 words, 13 bytes
The three numbers are, in order, the line count, the word count and the byte count of file1.
To count several files at once, say file1, file2 and file3, run:
$ wc file*
1 2 13 file1
2 5 33 file2
4 16 76 file3
7 23 122 total # grand totals
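When no file name is given, wc reads standard input, which makes it a natural end of a pipeline and also suppresses the file-name column. A small sketch:

```shell
printf 'Hello World!\n' | wc -l    # bare line count: 1
printf 'one two three\n' | wc -w   # word count: 3
ls /etc | wc -l                    # how many entries /etc contains
```

The last form, counting the lines another command produces, is probably the single most common use of wc.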
-L is handy when preparing terminal output; remember that terminals used to support at most 80 characters per line.
That limit is gone nowadays, but slightly shorter lines are still easier on the eyes.
For example, check the system release string:
$ wc -L /etc/redhat-release
40 /etc/redhat-release
so the longest line of that file is 40 characters.
The same trick can be used to find the longest line across all the files of a project.
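One way to sketch that project-wide search (the helper name longest_line is made up; -r on xargs is a GNU extension that skips running wc when nothing matches). It relies on the fact that when several files are given, wc -L's final "total" row holds the maximum:

```shell
# longest_line DIR GLOB - print the wc -L row with the largest value
# among the files matching GLOB under DIR.
longest_line() {
    find "$1" -name "$2" -print0 | xargs -0 -r wc -L | sort -n | tail -n 1
}
```

Usage: `longest_line . '*.c'` reports the longest line in a C project's source tree.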
The Linux
wget
command is a very widely used command-line tool for downloading files 📀.
It is indispensable for Linux users, particularly when you often download software or restore backups from a remote server to a local one.
wget
supports many protocols, such as
HTTP
,
HTTPS
and
FTP
, and it can also work through an HTTP proxy.
wget
has many strengths:
wget supports unattended downloads: it keeps running in the background after the user logs out. You can log in, start a wget download, log out, and wget will carry on until the task completes, which is a wonderfully powerful feature.
wget can follow the links on HTML pages and download them in turn, recreating the directory structure of the original site locally; this is usually called "recursive downloading". During a recursive download, wget honours the Robot Exclusion standard (/robots.txt).
wget can rewrite the links in downloaded pages to point at the local copies, which makes offline browsing easy.
wget is very robust: it adapts well to narrow bandwidth and unstable networks. If a download fails for network reasons, wget keeps retrying until the whole file is fetched; if the server interrupts the transfer, it reconnects and resumes from where it stopped. This is very useful for fetching large files from servers that limit connection time.
$ wget [options] [URL]
downloads a resource from the network; when no directory is specified, it saves into the current directory.
Although
wget
is powerful, it is still quite simple to use.
wget
takes a great many options, but the few below cover most everyday use; see the advanced section for the rest.
For example, let's download the latest Ubuntu release and see how it goes:
$ wget http://releases.ubuntu.com/16.04/ubuntu-16.04-desktop-amd64.iso
While downloading, wget shows a progress bar including the percentage completed, the bytes downloaded so far, the current speed, and the estimated time remaining.
-O is useful for dynamically generated links, especially when the file name is simply too… long:
$ wget -O wordpress.zip http://www.ubuntu.com/download.aspx?id=1234
$ wget -c http://releases.ubuntu.com/16.04/ubuntu-16.04-desktop-amd64.iso
# or
$ wget --continue http://releases.ubuntu.com/16.04/ubuntu-16.04-desktop-amd64.iso
wget -c restarts an interrupted download, which is a great help when a large transfer is cut off by, say, a network failure: the download continues instead of starting over. Use -c whenever you need to resume.
$ wget -o download.log URL
writes the download messages to a log file instead of the terminal; take special care to distinguish lowercase
-o
from uppercase -O.
The moment you see the command's name, what should come to mind is:
What is your name?
As the name suggests, this command tells you what another command does and prints the result; it is equivalent to the
-f
option of
man
.
The official definition of
whatis
is:
whatis - display manual page descriptions
It provides only a fairly brief description of a command.
Usage is simple as well:
$ whatis [options] name
where name can be a Linux command, a system call, a library function, and so on.
Taking some commands from earlier chapters as examples:
$ whatis ls cd file cat more less
ls (1) - list directory contents
ls (1p) - list directory contents
cd (1) - bash built-in commands, see bash(1)
cd (1p) - change the working directory
cd (n) - Change working directory
file (1) - determine file type
file (1p) - determine file type
file (n) - Manipulate file names and attributes
cat (1) - concatenate files and print on the standard output
cat (1p) - concatenate and print files
more (1) - file perusal filter for crt viewing
more (1p) - display files on a page-by-page basis
less (1) - opposite of more
less (3pm) - perl pragma to request less of something
As you can see, whatis can look up several commands at once.
whatis
also accepts options such as
-w
,
-r
and
-C
to use shell wildcards, regular expressions and an alternative configuration file, but it is at its best giving a quick one-line description of a command; anything deeper is better left to
man
.
… note::
Wherever you are, that is where the scenery is.
?
The Linux whereis
command locates the binary, source and manual-page files of a command.
It only searches a fixed set of standard directories, though; to locate other programs and files, consider the
locate
command.
The official definition is:
whereis - locate the binary, source, and manual page files for a command
The syntax is:
$ whereis [options] [-BMS directory... -f] name...
The options include:
-b
: search only for binaries
-m
: search only for manual pages
-s
: search only for source files
-B <directory>
search for binaries in the given directories
-M <directory>
search for manual pages in the given directories
-S <directory>
search for source files in the given directories
For example, to find where
bash
lives, type:
$ whereis bash
bash: /usr/bin/bash /etc/bash.bashrc /usr/share/man/man1/bash.1.gz
From left to right the output shows the program name, the paths of the bash binary and its configuration file, and the path of the bash man page.
Different options restrict the search to different kinds of files, as follows:
# search for binaries
$ whereis -b bash
bash: /usr/bin/bash /etc/bash.bashrc
# search for manual pages
$ whereis -m bash
bash: /usr/share/man/man1/bash.1.gz
# search for source files
$ whereis -s bash
bash:
The Linux which
command looks up a command, unlike
find
, which looks for files.
The official definition is:
which
- locate a command
It searches the directories listed in the current PATH environment variable for executables matching the given names.
$ which [-a] filename ...
The command has essentially only one option:
-a
print all matching pathnames of each argument
If a matching executable is found, which returns 0.
Look up a command and show its full path:
$ which bash
/usr/bin/bash
Your output may differ, depending on your environment variables.
A command may come in several versions, or the same version may live in several locations; use the -a option to list them all.
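A short sketch of -a, using sh since it exists everywhere; the exact paths depend on your system's PATH:

```shell
# Every place sh resolves to on $PATH, in search order
which -a sh
# A POSIX-portable alternative built into the shell itself
command -v sh
```

`command -v` is worth knowing alongside which: it is a shell builtin, so it also reports aliases, functions and builtins, whereas which only scans PATH for executables.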