
Localizing the ONLYOFFICE interface into Chinese

2019-11-03 08:47


The link above describes how to do the localization and development.

I built a document management system with the golang beego framework that supports real-time collaborative editing.

The steps are: first install Docker, then pull the Document Server image, then localize the interface, and finally use golang to provide the callback and store the edited documents.
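The callback step above follows the Document Server callback API: when editing ends, the server POSTs a JSON body to the configured callbackUrl, and status 2 means the edited document is ready to be downloaded from the given url and saved. Below is a minimal Go sketch of such a handler; the type and function names and the sample payload are illustrative assumptions, not engineercms's actual code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// callbackBody mirrors the callback fields that matter for saving;
// the field names follow the documented callback API.
type callbackBody struct {
	Key    string `json:"key"`    // document identifier
	Status int    `json:"status"` // 2 = document ready for saving
	URL    string `json:"url"`    // where the edited file can be downloaded
}

// handleCallback parses the callback body and returns the JSON the
// Document Server expects; anything other than {"error":0} makes the
// editor report a save failure to the user.
func handleCallback(raw []byte) (string, error) {
	var body callbackBody
	if err := json.Unmarshal(raw, &body); err != nil {
		return "", err
	}
	if body.Status == 2 {
		// In a real handler: fetch body.URL over HTTP and write the
		// bytes to storage under body.Key (omitted in this sketch).
		fmt.Printf("would download %s and store it under key %s\n", body.URL, body.Key)
	}
	return `{"error":0}`, nil
}

func main() {
	raw := []byte(`{"key":"doc1","status":2,"url":"http://documentserver/file.docx"}`)
	resp, err := handleCallback(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(resp)
}
```

In a beego app this logic would sit behind whatever route is registered as callbackUrl.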

Localization steps: 1. Delete the font files in the container and replace them with the Windows fonts.

Delete all files under /usr/share/fonts in the container, then run the script documentserver-generate-allfonts.sh, then clear the browser cache.

// Enter the (running) container and delete all files and folders under /usr/share/fonts except truetype

$ docker exec -it 38e27 /bin/bash

root@38e27823ae92:/# dir   (or ls -al)

 

root@38e27823ae92:~# cd /usr/share/fonts/
root@38e27823ae92:/usr/share/fonts# ls
truetype  X11

// Delete the X11 folder
root@38e27823ae92:/usr/share/fonts# rm -R dir X11
rm: cannot remove 'dir': No such file or directory
root@38e27823ae92:/usr/share/fonts# ls
truetype
root@38e27823ae92:/usr/share/fonts# cd truetype
root@38e27823ae92:/usr/share/fonts/truetype# ls -al
total 462392
drwxr-xr-x 11 root root 4096 Feb 19 04:17 .

………………

// Delete all files under the truetype folder except the custom folder

root@38e27823ae92:/usr/share/fonts/truetype# rm -R dir *.*
rm: cannot remove 'dir': No such file or directory
root@38e27823ae92:/usr/share/fonts/truetype# rm -R dir *
rm: cannot remove 'dir': No such file or directory
rm: cannot remove 'custom': Device or resource busy
root@38e27823ae92:/usr/share/fonts/truetype# ls
custom
root@38e27823ae92:/usr/share/fonts/truetype# ls -al
total 12
drwxr-xr-x 10 root root 4096 Feb 19 10:14 .
drwxr-xr-x  6 root root 4096 Feb 19 10:12 ..
drwxr-xr-x  2 root root 4096 Feb 19 03:48 custom

……

 

root@38e27823ae92:/usr/share/fonts/truetype# exit

exit

 

Administrator@604TFALNDKDKJWC MINGW64 ~/winfont

// Copy all the fonts in the winfont folder under the current folder C:\Users\Administrator into the container's /usr/share/fonts/truetype folder

$ tar -cv * | docker exec -i 6df tar x -C /usr/share/fonts/truetype

kaiu.ttf

msjh.ttc

msjhbd.ttc

msjhl.ttc

msyh.ttc

msyh.ttf

msyhbd.ttc

msyhl.ttc

simfang.ttf

simhei.ttf

simkai.ttf

simli.ttf

simsun.ttc

simsunb.ttf

simyou.ttf

……
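The tar pipe used above is a general way to copy a directory tree into a container without docker cp. Here is the same pattern demonstrated between two local directories, so it can be tried without Docker; the /tmp paths and the dummy font file are made up for this demo:

```shell
# Source dir standing in for ~/winfont, destination standing in for
# the container's /usr/share/fonts/truetype
mkdir -p /tmp/winfont /tmp/truetype
echo demo > /tmp/winfont/simsun.ttc

# Pack everything in the source and unpack it at the destination;
# in the article, the receiving tar runs inside the container via
# `docker exec -i <container-id> tar x -C /usr/share/fonts/truetype`
(cd /tmp/winfont && tar -cv *) | tar -x -C /tmp/truetype

ls /tmp/truetype
```

The pipe works because the first tar writes the archive to stdout and the second reads it from stdin, which `docker exec -i` forwards into the container.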

 

Administrator@604TFALNDKDKJWC MINGW64 ~/winfont

// Enter the container

$ docker exec -it 6df /bin/bash

root@6df:/# sudo mkfontscale
root@6df:/# sudo mkfontdir
root@6df:/# sudo fc-cache -fv
/usr/share/fonts: caching, new cache contents: 0 fonts, 1 dirs
…………
fc-cache: succeeded

root@6df:/# exit

exit

// Exit the container

Administrator@604TFALNDKDKJWC MINGW64 ~/winfont

$ docker exec 6df /usr/bin/documentserver-generate-allfonts.sh

Generating AllFonts.js, please wait...Done

onlyoffice-documentserver:docservice:stopped

onlyoffice-documentserver:docservice:started

onlyoffice-documentserver:converter:stopped

onlyoffice-documentserver:converter:started

 

If the machine is rebooted, do not use the docker run command again; use docker start, otherwise a fresh container without the font changes is created.

The concrete steps are shown in the screenshots:

Then, in the page that invokes the ONLYOFFICE collaborative editor, set "lang": "zh-CN":


    "editorConfig": {
        "callbackUrl": "",
        "user": {
            "id": "{{.Uid}}",
            "name": "{{.Uname}}"
        },
        "lang": "zh-CN", // "en-US"
    },
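For context, this is roughly how such a config reaches the editor in the page's JavaScript. DocsAPI is provided by the api.js script served by the Document Server (http://<host>:9000/web-apps/apps/api/documents/api.js); the document fields, URLs, and element id below are placeholder assumptions, not the article's actual values:

```javascript
// Illustrative editor configuration; only "lang" matters for the
// localization, the other values are placeholders.
const config = {
  document: {
    fileType: "docx",
    key: "demo-key-1",          // unique per document version
    title: "demo.docx",
    url: "http://hostmachine/demo.docx",
  },
  editorConfig: {
    callbackUrl: "http://hostmachine/onlyoffice/callback",
    user: { id: "u1", name: "demo" },
    lang: "zh-CN",              // switches the editor UI to Chinese
  },
};

// In the browser, after api.js has loaded:
//   new DocsAPI.DocEditor("placeholder", config);
console.log(config.editorConfig.lang);
```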

For the full code, see




Hadoop: dynamically adding, removing, and restoring datanodes

  1. Configure the system environment

Hostname, SSH mutual trust, environment variables, and so on.

JDK installation is omitted here; make sure the datanode's JDK install path matches java_home in /etc/hadoop/hadoop-env.sh. The Hadoop version is 2.7.5.

Edit /etc/sysconfig/network

Then run the command:
hostname <new-hostname>
After this you can log out of the system and log back in.

[root@localhost ~]# hostname
localhost.localdomain
[root@localhost ~]# hostname -i
::1 127.0.0.1
[root@localhost ~]#
[root@localhost ~]# cat /etc/sysconfig/network
# Created by anaconda
NETWORKING=yes
HOSTNAME=slave2
GATEWAY=192.168.48.2
# Oracle-rdbms-server-11gR2-preinstall : Add NOZEROCONF=yes
NOZEROCONF=yes
[root@localhost ~]# hostname slave2
[root@localhost ~]# hostname
slave2
[root@localhost ~]# su - hadoop
Last login: Sat Feb 24 14:25:48 CST 2018 on pts/1
[hadoop@localhost ~]$ su - root

Create the datanode directory and change its owner.

(For the concrete path values, refer to dfs.name.dir, dfs.data.dir, dfs.tmp.dir, etc. in /usr/hadoop/hadoop-2.7.5/etc/hadoop/hdfs-site.xml and core-site.xml on the namenode.)

su - root

# mkdir -p /usr/local/hadoop-2.7.5/tmp/dfs/data

# chmod -R 777 /usr/local/hadoop-2.7.5/tmp

# chown -R hadoop:hadoop /usr/local/hadoop-2.7.5

[root@slave2 ~]# mkdir -p /usr/local/hadoop-2.7.5/tmp/dfs/data
[root@slave2 ~]# chmod -R 777 /usr/local/hadoop-2.7.5/tmp
[root@slave2 ~]# chown -R hadoop:hadoop /usr/local/hadoop-2.7.5
[root@slave2 ~]# pwd
/root
[root@slave2 ~]# cd /usr/local/
[root@slave2 local]# ll
total 0
drwxr-xr-x. 2 root   root   46 Mar 21  2017 bin
drwxr-xr-x. 2 root   root    6 Jun 10  2014 etc
drwxr-xr-x. 2 root   root    6 Jun 10  2014 games
drwxr-xr-x  3 hadoop hadoop 16 Feb 24 18:18 hadoop-2.7.5
drwxr-xr-x. 2 root   root    6 Jun 10  2014 include
drwxr-xr-x. 2 root   root    6 Jun 10  2014 lib
drwxr-xr-x. 2 root   root    6 Jun 10  2014 lib64
drwxr-xr-x. 2 root   root    6 Jun 10  2014 libexec
drwxr-xr-x. 2 root   root    6 Jun 10  2014 sbin
drwxr-xr-x. 5 root   root   46 Dec 17  2015 share
drwxr-xr-x. 2 root   root    6 Jun 10  2014 src
[root@slave2 local]#

SSH mutual trust: enable passwordless login from master to slave2.

master:

[root@hadoop-master ~]# cat /etc/hosts

127.0.0.1  localhost localhost.localdomain localhost4 localhost4.localdomain4

::1        localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.48.129    hadoop-master

192.168.48.132    slave1

192.168.48.131    slave2

[hadoop@hadoop-master ~]$ scp /usr/hadoop/.ssh/authorized_keys hadoop@slave2:/usr/hadoop/.ssh

The authenticity of host 'slave2 (192.168.48.131)' can't be established.

ECDSA key fingerprint is 1e:cd:d1:3d:b0:5b:62:45:a3:63:df:c7:7a:0f:b8:7c.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'slave2,192.168.48.131' (ECDSA) to the list of known hosts.

hadoop@slave2's password:

authorized_keys

[hadoop@hadoop-master ~]$ ssh hadoop@slave2

Last login: Sat Feb 24 18:27:33 2018

[hadoop@slave2 ~]$

[hadoop@slave2 ~]$ exit

logout

Connection to slave2 closed.

[hadoop@hadoop-master ~]$

  2. Edit the slaves file on the namenode and add the new node

[hadoop@hadoop-master hadoop]$ pwd

/usr/hadoop/hadoop-2.7.5/etc/hadoop

[hadoop@hadoop-master hadoop]$ vi slaves

slave1

slave2

3. On the namenode, copy hadoop-2.7.5 to the new node, then on the new node delete the files in the data and logs directories

Master

[hadoop@hadoop-master ~]$ scp -r hadoop-2.7.5 hadoop@slave2:/usr/hadoop

Slave2

[hadoop@slave2 hadoop-2.7.5]$ ll

total 124

drwxr-xr-x 2 hadoop hadoop  4096 Feb 24 14:29 bin

drwxr-xr-x 3 hadoop hadoop    19 Feb 24 14:30 etc

drwxr-xr-x 2 hadoop hadoop  101 Feb 24 14:30 include

drwxr-xr-x 3 hadoop hadoop    19 Feb 24 14:29 lib

drwxr-xr-x 2 hadoop hadoop  4096 Feb 24 14:29 libexec

-rw-r--r-- 1 hadoop hadoop 86424 Feb 24 18:44 LICENSE.txt

drwxrwxr-x 2 hadoop hadoop  4096 Feb 24 14:30 logs

-rw-r--r-- 1 hadoop hadoop 14978 Feb 24 18:44 NOTICE.txt

-rw-r--r-- 1 hadoop hadoop  1366 Feb 24 18:44 README.txt

drwxr-xr-x 2 hadoop hadoop  4096 Feb 24 14:29 sbin

drwxr-xr-x 4 hadoop hadoop    29 Feb 24 14:30 share

[hadoop@slave2 hadoop-2.7.5]$ pwd

/usr/hadoop/hadoop-2.7.5

[hadoop@slave2 hadoop-2.7.5]$ rm -R logs/*

  4. Start the datanode and nodemanager processes on the new datanode

First confirm that the etc/hadoop/excludes file on the namenode and the current datanodes does not contain the host being added, then proceed:

[hadoop@slave2 hadoop-2.7.5]$ sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /usr/hadoop/hadoop-2.7.5/logs/hadoop-hadoop-datanode-slave2.out
[hadoop@slave2 hadoop-2.7.5]$ sbin/yarn-daemon.sh start nodemanager
starting nodemanager, logging to /usr/hadoop/hadoop-2.7.5/logs/yarn-hadoop-nodemanager-slave2.out
[hadoop@slave2 hadoop-2.7.5]$
[hadoop@slave2 hadoop-2.7.5]$ jps
3897 DataNode
6772 NodeManager
8189 Jps
[hadoop@slave2 ~]$

5. Refresh the nodes on the NameNode

[hadoop@hadoop-master ~]$ hdfs dfsadmin -refreshNodes
Refresh nodes successful
[hadoop@hadoop-master ~]$ sbin/start-balancer.sh

  6. Check the current cluster status on the namenode

Confirm that the new node has joined normally.

[hadoop@hadoop-master hadoop]$ hdfs dfsadmin -report
Configured Capacity: 58663657472 (54.63 GB)
Present Capacity: 15487176704 (14.42 GB)
DFS Remaining: 15486873600 (14.42 GB)
DFS Used: 303104 (296 KB)
DFS Used%: 0.00%
Under replicated blocks: 5
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0


Live datanodes (2):

Name: 192.168.48.131:50010 (slave2)
Hostname: 183.221.250.11
Decommission Status : Normal
Configured Capacity: 38588669952 (35.94 GB)
DFS Used: 8192 (8 KB)
Non DFS Used: 36887191552 (34.35 GB)
DFS Remaining: 1701470208 (1.58 GB)
DFS Used%: 0.00%
DFS Remaining%: 4.41%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Mar 01 19:36:33 PST 2018

Name: 192.168.48.132:50010 (slave1)
Hostname: slave1
Decommission Status : Normal
Configured Capacity: 20074987520 (18.70 GB)
DFS Used: 294912 (288 KB)
Non DFS Used: 6289289216 (5.86 GB)
DFS Remaining: 13785403392 (12.84 GB)
DFS Used%: 0.00%
DFS Remaining%: 68.67%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Mar 01 19:36:35 PST 2018

[hadoop@hadoop-master hadoop]$

7. Dynamically remove a datanode

7.1 Configure hdfs-site.xml on the NameNode

Reduce the dfs.replication replica count as appropriate and add the dfs.hosts.exclude property.

[hadoop@hadoop-master hadoop]$ pwd
/usr/hadoop/hadoop-2.7.5/etc/hadoop
[hadoop@hadoop-master hadoop]$ cat hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/usr/local/hadoop-2.7.5/tmp/dfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/usr/local/hadoop-2.7.5/tmp/dfs/data</value>
  </property>
  <property>
    <name>dfs.hosts.exclude</name>
    <value>/usr/hadoop/hadoop-2.7.5/etc/hadoop/excludes</value>
  </property>
</configuration>

7.2 Create an excludes file under the corresponding path on the namenode (etc/hadoop/) and write the IP or domain name of the DataNode to be removed into it.

[hadoop@hadoop-master hadoop]$ pwd
/usr/hadoop/hadoop-2.7.5/etc/hadoop
[hadoop@hadoop-master hadoop]$ vi excludes
####slave2
192.168.48.131
[hadoop@hadoop-master hadoop]$

7.3 Refresh all DataNodes from the NameNode

hdfs dfsadmin -refreshNodes
sbin/start-balancer.sh

7.4 Check the current cluster status on the namenode

Confirm that the node has been removed normally; slave2 no longer appears in the report.

[hadoop@hadoop-master hadoop]$ hdfs dfsadmin -report

Alternatively, on the web monitoring page (ip:50070) you can watch the DataNode gradually become Dead.

In the datanode list, once Admin State has changed from "In Service" to "Decommissioned", the removal succeeded.

7.5 Stop the processes on the removed node

[hadoop@slave2 hadoop-2.7.5]$ jps
9530 Jps
3897 DataNode
6772 NodeManager
[hadoop@slave2 hadoop-2.7.5]$ sbin/hadoop-daemon.sh stop datanode
stopping datanode
[hadoop@slave2 hadoop-2.7.5]$ sbin/yarn-daemon.sh stop nodemanager
stopping nodemanager
[hadoop@slave2 hadoop-2.7.5]$ jps
9657 Jps
[hadoop@slave2 hadoop-2.7.5]$

8. Restore a removed node

Delete the entry added in step 7.2, then repeat steps 4, 5, and 6.



http://blog.csdn.net/maodou95838/article/details/78194830?locationNum=1&fps=1#0-qzone-1-51693-d020d2d2a4e8d1a374a433f596ad1440

A simple installation gets you an elegant environment for collaborative office work over the network.

For questions, contact me directly: QQ 504284, WeChat hotqin999.

Requirements:

Windows 7 or later, below Windows 10 (Windows 10 differs only in which Docker software you install). For a LAN deployment, the computer must have a fixed IP (called the host machine below, keep that in mind) and its CPU virtualization feature must be enabled. Deploying to a cloud host works along the same lines.

Step 1: Confirm that CPU virtualization is enabled on the computer. https://jingyan.baidu.com/article/22fe7ced3b4c003002617f17.html

Step 2: Download and install Docker Toolbox with the default settings. Download it from http://get.daocloud.io/ and pick the Docker Toolbox build that supports older Windows versions; on Windows 10, download Docker directly.

Step 3: After starting Docker, pull the ONLYOFFICE Document Server image.

3.1 Pull the image by pasting the command below into the Docker window (to paste: right-click the window title bar, then Edit - Paste):

docker pull onlyoffice/documentserver

Alternatively, import my prebuilt image: see the docker load < documentserver.tar command.

After the pull completes, start documentserver.

3.2 Start documentserver (an image running in Docker is called a container), mapping the documentserver service in the container to port 9000 on the host machine. The point of the mapping is that any other computer accessing port 9000 on the host is effectively accessing the documentserver inside the container. On Windows, port forwarding must still be configured (see step 5).

docker run -i -t -d -p 9000:80 onlyoffice/documentserver

3.3 View the running containers:

Docker ps

Note down the container id; the following operations all use it. You don't need the whole id, the first 3 or 4 characters are usually enough. If you didn't note it, no problem: you can query it at any time with this command.

3.4 Enter the container (the running image):

$ docker exec -it 38e27 /bin/bash

Note: 38e27 is the container id. From here on you are operating inside the container's system, not in Docker, so everything is a Linux command. For example, you can inspect the folder layout:

root@38e27823ae92:/# dir

At this point it is already usable. However, the fonts in documentserver should be swapped for the WenQuanYi Linux fonts.

root@38e27823ae92:/# find / -name arial.ttf

Use the find command (a Linux command) to locate the directory the fonts live in; it should be /usr/share/fonts/truetype/msttcorefonts.

Once you've found that directory, cd into msttcorefonts level by level and delete all the font files inside it with the commands below:

[root]# rm -f *.ttf

[root]# rm -R dirname (delete everything)

Exit back out to Docker (the exit command) and copy the fonts over:

Administrator@604TFALNDKDKJWC MINGW64 /c/program files/git/usr/share/fonts

$ tar -cv * | docker exec -i 38e27823ae92 tar x -C /usr/share/fonts/truetype/msttcorefonts

msyh.ttf

wqy-microhei.ttc

wqy-zenhei.ttc

wqy-zenhei.ttf

This puts the fonts into the container's /usr/share/fonts/truetype/msttcorefonts folder. Then enter the container again (command: $ docker exec -it 38e27 /bin/bash) and run the following commands:

sudo mkfontscale (create the fonts.scale file for the new fonts; it controls font rotation and scaling)

sudo mkfontdir (create the fonts.dir file for the new fonts; it controls bold and italic generation)

sudo fc-cache -fv (build the font cache information, i.e. make the system recognize the new fonts)

Leave the container and return to Docker (exit), then regenerate the font data:

$ docker exec 38e27 /usr/bin/documentserver-generate-allfonts.sh

Generating AllFonts.js, please wait...Done

onlyoffice-documentserver:docservice:stopped

onlyoffice-documentserver:docservice:started

onlyoffice-documentserver:converter: stopped

onlyoffice-documentserver:converter: started

Step 4: Run engineercms.

Copy the whole engineercms folder to the host machine's D: drive and click engineercms.exe inside it. By default it uses port 80 on the host; if that conflicts with something, change the port in conf.

4.1 In the engineercms view folder, open the onlyoffice folder and then onlyoffice.tpl; replace the IP in the api.js address with your LAN host machine's IP (the port is the container's mapped port 9000), then replace the other 2 IPs with the host IP as well.

Step 5: Port forwarding for the virtual machine on Windows.

Because Docker runs inside a virtual machine, when another computer accesses the documentserver service in the container (as described in 3.2), it is really accessing the virtual machine, which then forwards to the container inside Docker. Open the Oracle VM VirtualBox that was created when Docker Toolbox was installed, and set up port forwarding as described in the link above.

This article was published by 时时app平台注册网站 on 彩世界网址. Please credit the source when reposting: ONLYOFFICE界面汉化,onlyoffice汉化
