Forum Discussion
8 Replies
- uyalamanchiFormer Employee
Hello @PSAmmirata -
Thank you for your question about configuring Kerberos with the SnapLogic Groundplex. The steps listed are sufficient to configure a Groundplex to work with Kerberized Hadoop clusters. Were you able to follow the steps and configure the Groundplex? Are you seeing any issues?
- PSAmmirataEmployee
Thank you. I haven’t followed the steps yet. I was just checking on the steps before I start.
- uyalamanchiFormer Employee
Excellent! Please let us know if you have any trouble configuring it.
- PSAmmirataEmployee
We’re using RHEL7. From what I’ve read, the krb5-auth-dialog package is deprecated in RHEL7.
- PSAmmirataEmployee
Step 5 says to “Generate the keytab file for the kerberos user.” Does anyone have details on how to do this?
- bgilesFormer Employee
Creating a keytab file is straightforward.
If you are creating a keytab file for a user with a password, you should use the ‘ktutil’ program.
$ ktutil
ktutil: add_entry -password -p principal -k kvno -e enctype
(enter password)
ktutil: write_kt keytabfile
ktutil: quit
where
- principal is your principal, e.g., bob@EXAMPLE.COM, or bob/hdfs@EXAMPLE.COM for a more restricted principal (realm names are conventionally uppercase)
- kvno is the key version number. 1 should be fine.
- enctype is the encryption type. This is typically something like aes128-cts-hmac-sha1-96, des3-cbc-sha1, or arcfour-hmac. You should check with your system administrator to get the precise encryption types required. You can repeat the add_entry line, once for each encryption type.
- keytabfile is your keytab file. It traditionally ends with the .keytab extension.
You can verify the new file with ‘klist -kt keytabfile’.
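For reference, the interactive session above can also be scripted. This is just a sketch with placeholder values (principal, password, enctype, and file name are all examples, not values from this thread), and it assumes MIT Kerberos ktutil, which reads the password from standard input when it is not attached to a terminal:

```shell
# Sketch: create a user keytab non-interactively (placeholder values).
# The line after add_entry -password is consumed as the password.
ktutil <<'EOF'
add_entry -password -p bob@EXAMPLE.COM -k 1 -e aes128-cts-hmac-sha1-96
s3cr3t-password
write_kt bob.keytab
quit
EOF

# Verify the entries that were written.
klist -kt bob.keytab
```

Scripting it this way is handy if you need to regenerate keytabs as part of provisioning, but for a one-off keytab the interactive session is fine.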
If you are creating a keytab file for a server you must use the ‘kadmin’ program.
If the server principal does not exist yet:
$ kadmin
kadmin: add_principal principal
kadmin: ktadd -k keytabfile principal
If the server principal already exists:
$ kadmin
kadmin: ktadd -k keytabfile -norandkey principal
where principal is something like “hdfs/172.3.1.7@MYORG.EXAMPLE.COM”.
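If you prefer a non-interactive form, MIT kadmin also accepts a single command via its -q option. A sketch with placeholder values (the admin principal and keytab path are assumptions, not values from this thread):

```shell
# Sketch: create a service principal and export its keytab in one shot.
# Adjust the admin principal, service principal, and path for your realm.
kadmin -p admin/admin@MYORG.EXAMPLE.COM \
  -q "add_principal -randkey hdfs/172.3.1.7@MYORG.EXAMPLE.COM"
kadmin -p admin/admin@MYORG.EXAMPLE.COM \
  -q "ktadd -k /etc/security/keytabs/hdfs.keytab hdfs/172.3.1.7@MYORG.EXAMPLE.COM"
```

Note that ktadd without -norandkey rotates the principal’s key, so run it only once and distribute the resulting keytab.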
- PSAmmirataEmployee
Step 6/7 refers to a Hadoop (HDFS) configuration directory. We’re using Cloudera and our Cloudera admin said we don’t have a config directory. Can anyone provide details on what directory this is?
- PSAmmirataEmployee
We did get this working. Here are some of my notes that may be useful.
- We use EMC Isilon storage and the Hadoop configuration details were contained in the Isilon client configuration file.
- We used Cloudera Hive JDBC driver 2.5.19 and needed to specify all of the JAR files extracted from the ZIP file archive.
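For anyone following along, a Kerberized connection with the Cloudera Hive JDBC driver typically looks something like the sketch below. The host, realm, classpath location, and driver class name are placeholders based on the Cloudera JDBC 2.5.x driver conventions (AuthMech=1 selects Kerberos), not values from our environment:

```shell
# Sketch: put every JAR extracted from the driver ZIP on the classpath,
# then connect with a Kerberos (AuthMech=1) JDBC URL. Placeholder values.
export CLASSPATH="/opt/cloudera-hive-jdbc/*"
beeline -d com.cloudera.hive.jdbc41.HS2Driver \
  -u "jdbc:hive2://hive-host.example.com:10000/default;AuthMech=1;KrbRealm=EXAMPLE.COM;KrbHostFQDN=hive-host.example.com;KrbServiceName=hive"
```

As noted above, the driver will not work unless all of the JARs from the ZIP are on the classpath, which is why the wildcard is used rather than a single JAR.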