Kerberos Security - a single sign-on authentication protocol that uses the concept of "tickets" to prove identity
Main Components (3):
- Client
- Server
- KDC - Key Distribution center
1 - A user logs in to the client machine. The client sends a plaintext request for a TGT. The message contains: (ID of the user; ID of the requested service (TGT); the client's network address (IP); requested ticket lifetime)
2 - The Authentication Server will check if the user exists in the KDC database.
If the user is found, it will randomly generate a key (session key) for use between the user and the Ticket Granting Server (TGS).
The Authentication Server will then send two messages back to the client:
- One is encrypted with the TGS secret key.
- One is encrypted with the Client secret key.
NOTE:
The TGS Session Key is the shared key between the client and the TGS.
The Client secret key is the hash of the user credentials (username+password).
3 - The client decrypts the session key with its own secret key and can log on, caching the key locally. It also stores the encrypted TGT in its cache.
When accessing a network resource, the client sends a request to the TGS containing the name of the resource it wants to access, the user ID/timestamp, and the cached TGT.
4 - The TGS decrypts the user information, generates a service ticket and a service session key for accessing the service, and sends them back to the client encrypted.
5 - The client sends the request to the server (presenting the service ticket, with the request encrypted using the service session key).
6 - The server decrypts the request and, if it is genuine, provides access to the service.
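The NOTE above says the client secret key is a hash of the user's credentials. A toy sketch of that idea only - real Kerberos derives the key with a salted string-to-key function (e.g., RFC 3962 for AES types), not a bare hash, and all names below are made up:

```shell
# Toy illustration, NOT the real Kerberos string-to-key function.
principal="santosh@TCS.COM"
password="s3cret"
# Mixing in the principal name (like a salt) means two users with the
# same password still end up with different keys.
client_key=$(printf '%s%s' "$principal" "$password" | sha256sum | awk '{print $1}')
echo "$client_key"
```

The KDC stores the same derived key, so it never needs the plaintext password to check the AS exchange.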
Kerberos Terminology
- Credential cache or ticket file = the tickets cached on the client side (valid for a limited period); a keytab file, by contrast, stores the long-term key
- KDC is basically your AD (Active Directory) in a Windows domain
- kinit - command used to initiate the Kerberos process (obtain a TGT)
Lab
This user doesn't have a TGT - so the login failed.
How to create a TGT:
- The user should be present in AD (Active Directory).
- Go to AD service accounts and create a new user called hdfs, then initiate the TGT process for this user.
- Now create a service ticket for this user - the command to initiate the process:
kinit - give the password created in AD
Now check the ticket duration and the service principal credentials (a listing of all keys for this user).
Now the hdfs user is able to access the service.
After the expiry date, this user has to run kinit again to access the cluster.
----------------------------
************* Hadoop security is nothing but AAA (Aug 13 2020)
authentication - Kerberos - authentication of users and services (is the user who they claim to be when accessing the data?)
authorization - ACLs (Sentry/Ranger) - when a user tries to access data, does he have the proper read/write permissions? (Hive/HBase/HDFS - Sentry level)
auditing - Cloudera Navigator
Data needs to be transferred from the source to Hadoop (the cluster's services).
If data flows between source and destination, how can you make sure it is safe? Encryption in transit - achieved using TLS (Transport Layer Security) and SSL (Secure Sockets Layer).
Encryption at rest - achieved via HDFS crypto (encryption) zones (another level of security provided by the HDFS cluster).
ACLs, Kerberos, Ranger (Sentry), TLS/SSL, crypto zones
Layers of Hadoop security:
1. Network-level security (firewall - the Knox gateway is used for that) - perimeter-level security
2. OS security - file-level permissions
Example: there is a file hadoop.txt with RWX permissions (owner, group, others); user san belongs to the admin group, and everyone else can read the data.
Teams: dev, admin, QA, machine learning, analytics. san wants to modify the file - here ACLs come into the picture: they help us define permissions on a particular file or directory.
Without changing the default permissions, we can grant him additional permissions with the help of ACLs.
rwxr--r-- (default permissions) - user, group, other. (One way is chown, but that disturbs the default permissions/ownership.)
ACLs - applied to data to restrict access to approved entities (add additional permissions without disturbing the default permissions).
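The default rwxr--r-- mode above can be reproduced on any Linux box (a throwaway sketch; the file name is arbitrary):

```shell
# Create a scratch file and give it the rwxr--r-- mode from the notes.
tmpdir=$(mktemp -d)
touch "$tmpdir/hadoop.txt"
chmod 744 "$tmpdir/hadoop.txt"      # rwx for owner, r-- for group and others
stat -c '%A' "$tmpdir/hadoop.txt"   # prints: -rwxr--r--
rm -r "$tmpdir"
```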
ACLs - set a property in hdfs-site.xml (dfs.namenode.acls.enabled = true) and restart the NameNode/HDFS service - after that, ACLs can be used.
setfacl - set ACLs on a particular dir or file: hdfs dfs -setfacl -m group:execs:r-- /sales-data
getfacl - list the ACLs on a particular file: hdfs dfs -getfacl /sales (look for other options)
Together these let you set and inspect ACL entries.
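For reference, the hdfs-site.xml property involved is dfs.namenode.acls.enabled; a minimal fragment (restart HDFS after setting it):

```xml
<property>
  <name>dfs.namenode.acls.enabled</name>
  <value>true</value>
</property>
```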
***********
*********** ACLs and Kerberos (Aug 14 2020)
ACLs? - to set user-, group-, and directory-level access (rwx permissions on a particular file or dir; used for authorization).
Kerberos? What is it used for? Why is it integrated with the Hadoop cluster?
Used for authentication.
Why is it used for a Hadoop cluster?
Normally, Hadoop cluster access: network admins, DB admins, Hadoop admins, support people - all connect to the Linux box.
These users can run Linux and Hadoop commands (like ls / hdfs dfs -ls).
Some user might mistakenly run hdfs dfs -rm -r -f -skipTrash (e.g., a network admin executes it by scrolling through his history), so we need to restrict Hadoop access to unauthenticated users.
We need to restrict usage of Hadoop commands to non-Hadoop-admin users (only valid users should be able to run Hadoop commands).
userid/pwd - will be connected with AD.
Every environment will have user and group management tools (AD is taken care of by the Windows admins) - AD, LDAP, PAM, FreeIPA.
When a user joins a company, he gets a user ID and password through these.
The user is added to a particular group for a set of servers; now he logs in to one server with his user/pwd and can use all commands (but how can we restrict usage of Hadoop commands to non-Hadoop users?).
To prevent misuse of Hadoop commands, we install Kerberos packages (krb5 workstation/server/libraries).
Hadoop security - installation - once Kerberos is set up, inside Kerberos you provide an identity (i.e., a principal) to each user; that is the UPN - user principal name (for services it is the SPN - service principal name).
First the user logs in to the Linux server; now, to run Hadoop commands, he needs to obtain a ticket.
Once the user is authenticated by Kerberos, it provides a ticket, and then he can run Hadoop commands.
**** Kerberos types? 2 types (MIT and AD). Kerberos is a network authentication protocol developed at MIT - used to authenticate users and services - a system for authenticating access to distributed services.
The output of a Hadoop command will be the same on any of the servers - e.g., if you run hdfs dfs -ls /.
MIT:
The krb5 server hosts the KDC (all details related to user/service principal names and tickets).
The KDC can be installed on a master server, or on a dedicated VM.
***** Architecture
Assume you have installed the KDC - inside the KDC are the AS and the TGS: the Authentication Server (authentication of users) and the Ticket Granting Server (providing tickets).
A user requests a ticket using the kinit command (with his user principal name). The request goes to the KDC, and the AS checks whether he is a valid user. If valid, it replies with a TGT (the token of a valid user). The user takes the TGT to the TGS, which replies with a service ticket; then he can go to the Hadoop server and access the service.
If you see an error like "GSS exception", you can safely assume it is a Kerberos issue.
1. kinit <user principal name>
pwd: XXXX --> KDC (DB of service/user principal names) -> AS (validates user credentials and provides a TGT) -> with the TGT the user goes to the TGS -> now he can access Hadoop services.
Kerberos credentials are separate from the server (OS) credentials.
If you are using MIT Kerberos, you need to manually create user principal names.
If you are using AD Kerberos, user principal names are created automatically (real-world clusters will have AD integration with Kerberos).
Anyone who wants to run a Hadoop command needs a Kerberos ticket - but there is no need to run kinit every time. By default a Kerberos ticket is valid for 24 hours and renewable for up to 7 days: renew it every 24 hours, and after the 7-day renewable lifetime you must run kinit (with the password) again.
kinit - raise/generate a ticket
klist - check the validity of a ticket
kdestroy - destroy the ticket (remove the credential cache)
Go to the KDC server - run the command kadmin.local
Syntax of a UPN (user principal name): username@REALM - e.g., santosh@TCS.COM (the realm is conventionally written in uppercase).
Syntax of an SPN (service principal name): servicename/<FQDN of the server>@REALM (no need to generate it manually - it is generated automatically).
Example FQDNs: dn1.tcs.com, dn2.tcs.com, ...
Examples of SPNs: hdfs/compute1.tcs.com@TCS.COM, hdfs/compute2.tcs.com@TCS.COM
Where are all the Kerberos configs stored? In the krb5.conf file, at /etc/krb5.conf (this file will be the same across all servers in the cluster).
How does a user generate a ticket? kinit with the UPN - but whenever he does, he has to type the password. So the keytab comes into the picture: a file that holds both the principal name and the encrypted key (password). The extension of a keytab file is filename.keytab (santosh.keytab). To avoid password-typing/reset scenarios, create a keytab file and give it to the user.
How to create a keytab file (it acts like a password)? xst -k <keytab> <upn> - to generate a keytab you need to log in to the KDC server (inside kadmin.local).
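The UPN/SPN patterns above can be sketched in shell (the realm and host names are just the document's examples):

```shell
# Assemble UPNs and SPNs for the example hosts.
realm="TCS.COM"
echo "santosh@${realm}"             # UPN: user@REALM
for host in dn1.tcs.com dn2.tcs.com; do
  echo "hdfs/${host}@${realm}"      # SPN: service/FQDN@REALM
done
# Pulling the parts back out of an SPN with parameter expansion:
spn="hdfs/compute1.tcs.com@TCS.COM"
service=${spn%%/*}                  # everything before the first '/'
rest=${spn#*/}
fqdn=${rest%@*}                     # everything between '/' and '@'
echo "$service $fqdn"               # prints: hdfs compute1.tcs.com
```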
klist - command to read a keytab file: klist -kte </path/to/keytab>
ktutil - utility to read and manage keytab files
If you run kinit -kt <keytab> <upn>, it will not ask for a password.
Realm or domain name - the authentication domain (every cluster has one); "domain name" and "realm" mean the same thing here.
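Putting the terminology together, a minimal /etc/krb5.conf sketch might look like this (the KDC host name is hypothetical; the lifetimes match the defaults these notes describe):

```
[libdefaults]
  # the cluster's realm
  default_realm = TCS.COM
  # ticket valid 24h, renewable up to 7 days
  ticket_lifetime = 24h
  renew_lifetime = 7d

[realms]
  TCS.COM = {
    # KDC listens on port 88 by default
    kdc = master1.tcs.com
    admin_server = master1.tcs.com
  }
```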
********* Practical of ACL, Kerberos, Ranger (Aug 15 2020)
By default ACLs are disabled - we need to enable the config parameter in the Cloudera Manager UI and restart the cluster.
Add users for checking:
Add users in Linux: useradd san; useradd sat
Add groups: groupadd admin; groupadd dev; groupadd qa
Add a user to a group - add the san user to the admin group: usermod -a -G admin san, or usermod -a -G admin,dev,qa san (added to all the groups)
useradd -G admin sat
id -a san (shows he belongs to the admin group)
# su kumar (switch to the kumar user)
# exit
Set a password for a user: # passwd san (change the password for the user)
# gpasswd -d kumar dev (remove user kumar from the dev group) - all of these things are taken care of by the Linux team
(The Linux user will have the entire Hadoop environment.)
# vi /etc/passwd (view all users' info)
# useradd -G admin satish
# cat /etc/passwd (all user details are saved under this)
# cat /etc/group (all group info)
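A quick sketch of what an /etc/group line looks like and how to pull the fields out (the sample line is made up, not read from a live system):

```shell
# /etc/group format: group_name:password_placeholder:GID:member_list
line="admin:x:1001:san,sat"
group=${line%%:*}      # first field: group name
members=${line##*:}    # last field: comma-separated members
echo "$group -> $members"   # prints: admin -> san,sat
```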
Go to HDFS NameNode health (in the CM UI).
# su hdfs (switch to the hdfs user)
# hdfs dfs -ls /
# exit
# su santosh; # hdfs dfs -ls /hbase (no such file/dir - no access)
# exit
# su hdfs
# hdfs dfs -getfacl /hbase (looking at the ACLs on the HBase directory - lists all entries)
# hdfs dfs -setfacl -R -m user:santosh:rwx /hbase (giving santosh permissions on the entire hbase directory); -m = modify - sets the permissions
# hdfs dfs -getfacl /hbase (santosh will now show as having permission)
exit, then # su santosh
# hdfs dfs -ls /hbase (now he can see all the dirs)
Set ACLs for a group: # su hdfs
# hdfs dfs -setfacl -R -m group:admin:r-- /hbase (now the admin group has read-only permission)
Remove an ACL entry: # hdfs dfs -setfacl -x group:admin /hbase
Multiple ACLs: # hdfs dfs -setfacl -m user:kumar:rwx,user:arun:r--,group:qa:r-x,group:dev:rwx /hbase (specifying the hbase directory)
# hdfs dfs -ls / (a + at the end of the permission string means ACL entries are set); now you can see the ACLs for a particular directory.
Be careful not to grant wrong permissions.
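A small sketch of spotting the trailing '+' that marks ACL entries in a listing (the permission string below is a made-up sample, not real hdfs dfs -ls output):

```shell
# ls -l and hdfs dfs -ls append '+' to the mode string when an entry
# carries ACLs beyond the classic user/group/other bits.
perms="drwxr-xr-x+"
case "$perms" in
  *+) echo "ACL entries present" ;;
  *)  echo "classic permissions only" ;;
esac
```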
*********** Kerberos
Authentication of users and services.
KDC (AS and TGS) - the KDC is the DB that stores the Kerberos data.
Kerberos ticket maximum (renewable) validity is 7 days, and the renew interval is 24 hours.
KDC port number: 88
The user manages tickets with: # kinit, # klist, # kdestroy, # kadmin.local
keytab = UPN + encrypted key (password)
# kinit -kt </path/to/keytab> <upn>
To read a keytab file: # klist -kte <path of keytab>
To export a keytab file: xst -k <keytab> <upn> (inside kadmin.local)
********** How to install Kerberos
1. First install the KDC server.
krb5-server (only on one server, i.e., the master); krb5-workstation and libraries (on all servers that are part of the cluster).
Prerequisites:
1. You need the same Java version across all servers.
Unlimited-strength (JCE) policy JAR files (download them from the Oracle site).
2. Make sure you have configured the right encryption types that Kerberos should use for generating keytabs (/etc/krb5.conf = all Kerberos configs are stored in this file).
##### How to set up kerberos
Go to the master server.
Download the JAR files: # java -version; wget http://oracle.com....../jce-policy-8 (download this file and copy it over using WinSCP)
Log in to the Linux server: # ll
# cp ... /usr/java/jdk1.8.0../jre/lib/security (override/replace the files there)
# cd /root
# scp -r <policy files dir>/ root@192.16...:/usr/java/jdk1.8.0../jre/lib/security (copy this to the 3 compute servers)
Download MobaXterm - open the servers there.
Cross-verify that these files are present: # ls -l /usr/java/jdk1.8.0../jre/lib/security
****** Installing the KDC on one of the master servers
# yum -y install krb5... (first it will go to the repos directory and read all the packages) - that's why we move them:
# mv * /etc/yum.repos.d
# yum -y install krb5-server krb5-workstation (it will take some time to execute)
On the master: change the realm name in /etc/krb5.conf (change the domain name and the encryption types).
#su hdfs
# hdfs dfs -ls /
# the user is currently executing hdfs commands without having any ticket (Kerberos not yet enabled)
# sed -i 's/example/TCS/' /etc/krb5.conf
#hostname
# add encryption types (refer to the document)
# kadmin.local (command to enter the KDC admin shell)
# listprincs (list the principals)
#exit
addprinc cloudera-scm@HADOOP.COM (admin principal)
addprinc root/admin@HADOOP.COM (an admin principal in this realm, with the capability to create principals)
listprincs
# service krb5kdc restart
# service kadmin start
*** Now go to the CM UI - open your cluster - Administration -> Security -> Enable Kerberos -> select MIT KDC (but in real environments it is AD) -> manage all the principals -> once you restart the cluster, Kerberos will be enabled.
# kadmin.local
# xst --------
# klist -kte cms.keytab (read the keytab file)
Change the ownership and permissions of that keytab file.