
Showing posts from 2016

how to add storage to Hadoop DataNode

Hi All,
This is a very simple operation that tends to be made very confusing in other posts I have read. All you need in order to add storage to Hadoop is a new disk, mounted on a new volume (directory).

After you get the new disk, ask your system administrator to mount it on the desired directory:

[root@RHEL7datanode1 ~]# mkdir -p /hadoop/add_disk/
[root@RHEL7datanode1 ~]# chown -R hdfs:hadoop /hadoop
[root@RHEL7datanode1 ~]# su - hdfs

Now you need to change the configuration file and add this new folder to hdfs-site.xml:

vi /usr/hdp/2.3.0.0-2557/hadoop/conf/hdfs-site.xml

    <property>
      <name>dfs.datanode.data.dir</name>
      <value>/data2/hdp,/data1/hdp,/hadoop/add_disk</value>
    </property>

Restart the DataNode:

[hdfs@RHEL7datanode1 ~]$ /usr/hdp/2.3.0.0-2557/hadoop/sbin/hadoop-daemon.sh stop datanode
stopping datanode
[hdfs@RHEL7datanode1 ~]$ /usr/hdp/2.3.0.0-2557/hadoop/sbin/hadoop-daemon.sh start datanode
starting datanode, loggin
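
Once the DataNode is back up, it is worth confirming the new directory is actually being used. A minimal sketch of that check, assuming the hdfs client is on the PATH and the same mount point as above (adjust to your own paths):

[hdfs@RHEL7datanode1 ~]$ df -h /hadoop/add_disk           # the new mount should show its full capacity
[hdfs@RHEL7datanode1 ~]$ ls /hadoop/add_disk              # a block pool subdirectory appears once the DataNode starts writing there
[hdfs@RHEL7datanode1 ~]$ hdfs dfsadmin -report | head -40 # this node's configured capacity should grow by the size of the new disk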

tnsping and sqlplus take a long time even though TNSPING response time is great

This time I faced this scenario: one of my clients called me and told me that his application server has difficulties creating the connection pool against the DB.

First I checked the DB server side and saw that everything was OK. I tried to connect from my laptop - again OK. From the application server things were getting weird: once every few tnsping commands, or sqlplus commands using tnsnames.ora, the command took more than 10 seconds to return to the prompt...

I made these changes to isolate the problem:

In sqlnet.ora I changed
SQLNET.AUTHENTICATION_SERVICES = (NTS)
to:
SQLNET.AUTHENTICATION_SERVICES = (NONE)

and in tnsnames.ora I used the IP instead of the DB hostname.

But still - same behavior as above. Then I started to drill down and found this amazing finding:

* in the registry, a key called TNS_ADMIN under LOCAL MACHINE -> SOFTWARE -> ORACLE was pointing to a centralized Oracle client s
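
If you want to check the same thing on your own Windows client, the value can be read from a command prompt. A quick sketch, assuming the client is registered under HKLM\SOFTWARE\ORACLE (the exact subkey name depends on your client home):

reg query HKLM\SOFTWARE\ORACLE /s /v TNS_ADMIN
rem the TNS_ADMIN environment variable, if set, usually takes precedence over the registry value:
echo %TNS_ADMIN%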

KUP-04074: no write access to directory when trying to query EXTERNAL table

Hi All,
You may think that in order to query an external table you should only need read permission on the Directory Object, which sounds very reasonable... but some external table definitions include this part of code:

ACCESS PARAMETERS
  ( records delimited by newline
    LOGFILE "EXTERNAL_FILES" : 'faults.txt'
    FIELDS TERMINATED BY ';'
    missing FIELD values are null

The LOGFILE clause in this definition forces a write permission on the directory object, else you will run into this error:

ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04074: no write access to directory object EXTERNAL_FILES
29913. 00000 -  "error in executing %s callout"
*Cause:    The execution of the specified callout caused an error.
*Action:   Examine the error messages and take appropriate action

This will do the work:

GRANT READ, WRITE ON DIRECTORY EXTERNAL_FILES TO USER;
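
Before granting, you can check what privileges a user already has on the directory. A small sketch, assuming you can query the DBA views (the directory name matches the example above):

SELECT grantee, privilege
FROM   dba_tab_privs
WHERE  table_name = 'EXTERNAL_FILES';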

backup all oracle users and their passwords

If you have a process that runs over your DB daily, or any other use case where you need to recover your users afterwards, you can simply run the query below to generate a simple script using the DBMS_METADATA functions, and run it over the instance whenever you need to:

SQL> set head off
set pages 0
set long 9999999
spool create_users.sql
SELECT DBMS_METADATA.GET_DDL('USER', USERNAME) || '/' DDL FROM DBA_USERS
UNION ALL
SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT', USERNAME) || '/' DDL FROM DBA_USERS
UNION ALL
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT', USERNAME) || '/' DDL FROM DBA_USERS
UNION ALL
SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT', USERNAME) || '/' DDL FROM DBA_USERS;
spool off;

enjoy :-)
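
When you need the users back, the spooled file can be run as-is from SQL*Plus (assuming create_users.sql sits in your current directory). One caveat worth knowing: DBMS_METADATA.GET_GRANTED_DDL may raise ORA-31608 for a user that has no grant of the requested type, so you may need to filter such users out or handle the exception in a small PL/SQL wrapper.

SQL> @create_users.sql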

ora-65035 while creating a pluggable database from another one

or: ORA-21000: error number argument to raise_application_error of -65035 is out of range

Cause: this error is caused by an unrecoverable (open) transaction in the source DB.

Solution:
1. either stop and start your pluggable source DB
2. or identify the open transaction and kill it if you can. Use this query to do so:

select * from v$transaction t, v$session s where t.ses_addr = s.saddr;
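
If you go with option 2, the session holding the transaction can be killed once you have its SID and SERIAL# from the query above. A sketch with placeholder values (replace 123,4567 with what v$session returns):

SQL> select s.sid, s.serial#, s.username from v$transaction t, v$session s where t.ses_addr = s.saddr;
SQL> alter system kill session '123,4567' immediate;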

format output text in PHP using the shell_exec function

Hi there,
If you face this nagging thing where the output of a shell_exec command appears as one long line, you must keep reading - this is a super easy solution and you will thank me after that.

Let's say the output is saved in the $output variable:

<?php
    $output = shell_exec($cmd);
    echo $output;  # will print a long line with the output

    # in order to fix the output and get line breaks, use this code
    foreach (preg_split("/((\r?\n)|(\r\n?))/", $output) as $line) {
        echo "<br/>";
        echo $line;
    }
?>

have fun. support me.

ORA-30653: reject limit reached

When working with EXTERNAL tables you should be aware of the REJECT LIMIT clause. REJECT LIMIT 0 means that any record that cannot be loaded (for example a record with missing field values) is rejected, and this error will be raised when trying to access the table:

ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-30653: reject limit reached

If you want to see empty fields as NULL values in the row, add this line to your table definition:

MISSING FIELD VALUES ARE NULL

If you want to keep empty lines from being shown as rows, add these two lines to your table definition:

MISSING FIELD VALUES ARE NULL
REJECT ROWS WITH ALL NULL FIELDS
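
For context, here is where those clauses sit in a complete definition. A minimal sketch with a hypothetical table, directory and file name (only the two lines above come from this post, the rest is illustrative):

CREATE TABLE faults_ext (
  fault_id    NUMBER,
  fault_text  VARCHAR2(200)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY external_files
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ';'
    MISSING FIELD VALUES ARE NULL
    REJECT ROWS WITH ALL NULL FIELDS
  )
  LOCATION ('faults.txt')
)
REJECT LIMIT 0;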

how to tell if an oracle session is still active and running?

First look for your session status:

select * from v$session where status = 'ACTIVE';

Then check the long operations view to try to estimate the progress pace:

SELECT (sofar / totalwork) * 100 AS percent
FROM   v$session_longops
WHERE  sofar <> totalwork;

If you encounter this wait event: "wait for unread message on broadcast channel", probably your session is still alive and running.
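
To see which session each long operation belongs to, v$session_longops can be joined back to v$session. A small sketch (the column choice is mine, adjust as needed):

SELECT s.sid, s.serial#, s.username, l.opname,
       ROUND(l.sofar / l.totalwork * 100, 1) AS pct_done,
       l.time_remaining
FROM   v$session_longops l
JOIN   v$session s ON s.sid = l.sid AND s.serial# = l.serial#
WHERE  l.totalwork > 0
AND    l.sofar <> l.totalwork;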

ORA-00059: maximum number of DB_FILES exceeded

When trying to copy a PDB using

SQL> create pluggable database TARGET_PDB from SOURCE_PDB;

you may face this error. The solution is very simple - you need to increase the parameter DB_FILES:

SQL> show parameter db_files;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_files                             integer     200

Key points:
* when a PDB is closed - the files still exist, but you won't see them when counting from the CDB_DATA_FILES view.
* to drop a pluggable database including its datafiles use this command:

SQL> drop pluggable database PDB_NAME including datafiles;
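
DB_FILES is a static parameter, so increasing it means changing the spfile and restarting the CDB. A sketch with an example value (pick a limit that fits your environment):

SQL> alter system set db_files = 500 scope = spfile;
SQL> shutdown immediate;
SQL> startup;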

SQL Native Error Code '2006' MySQL server has gone away

This error means that your application (client) has lost the connection to the mysql server. You will hit this error when a session exceeds the timeout limit while idle, or when there are network issues and packet loss (you can define the packet size above which this error is raised).

So - go to your cnf file (the standard is my.cnf, unless you've changed it), look for these parameters and use appropriate values per your needs:

wait-timeout                   = 31536000
max-allowed-packet             = 32M

If your mysql server is already running and you don't want to restart it, these parameters can be changed on the fly (dynamic parameters) - you should use the SET GLOBAL/SESSION command to change them.

Full list of dynamic parameters: http://dev.mysql.com/doc/refman/5.7/en/dynamic-system-variables.html

Enjoy
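
For the on-the-fly route, a minimal sketch (values are just examples; note that the variable names use underscores in SQL, and a GLOBAL change only affects sessions opened after it):

mysql> SET GLOBAL max_allowed_packet = 33554432;  -- 32M
mysql> SET GLOBAL wait_timeout = 31536000;
mysql> SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';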

ORA-27104: system-defined limits for shared memory was misconfigured

I faced this error while trying to restore & recover a PDB (pluggable database). Part of the log file and the solution are described here below.

log:

initialization parameters used for automatic instance:
db_name=CDB
db_unique_name=gbux_pitr_PDB1_CDB
compatible=12.1.0.2.0
db_block_size=8192
db_files=200
diagnostic_dest=/oracle/app/oracle
_system_trig_enabled=FALSE
sga_target=1888M
processes=200
db_create_file_dest=/oracle/auxilary
log_archive_dest_1='location=/oracle/auxilary'
enable_pluggable_database=true
_clone_one_pdb_recovery=true
#No auxiliary parameter file used

starting up automatic instance CDB

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 07/31/2016 16:22:20
RMAN-04014: startup failed: ORA-27104: system-defined limits for shared
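
ORA-27104 generally points at the kernel shared-memory limits (kernel.shmall / kernel.shmmax) being too small for the SGA the automatic instance tries to allocate. A hedged, general diagnostic on Linux (not necessarily the exact fix applied in this case):

# current limits: shmmax is in bytes, shmall is in pages, and both need to cover the sga_target (1888M above)
sysctl kernel.shmmax kernel.shmall
getconf PAGE_SIZE
# after raising the values in /etc/sysctl.conf, reload them
sysctl -p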

Setup pluggable database (PDB) as a recovery catalog for all other pluggable DBs in the container

In order to make a recovery catalog using a pluggable database as part of your container database, you should follow these instructions.

Special notes:
* a PDB can act as a recovery catalog
* a recovery catalog is not supported when connected to a pluggable database
* you must register the whole CDB against the PDB catalog

Creating a pluggable database for the RMAN catalog:

bash-4.2$ echo $ORACLE_SID
CDB
bash-4.2$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jul 28 15:38:17 2016
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> create pluggable database rman admin user rman identified by rman;
SQL> alter session set container = rman;
Session altered.
SQL> alter database open;
Database altered.

Preparing a specific tablespace:

SQL> conn sys@RMAN as sysdba
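
Once the catalog schema exists inside the PDB, registering the container would look roughly like this. A sketch assuming a tnsnames.ora entry called RMAN pointing at the new PDB service, and that the rman user created above has been granted recovery_catalog_owner and a default tablespace:

bash-4.2$ rman target / catalog rman/rman@rman
RMAN> create catalog;
RMAN> register database;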

Missing command :fuser Prerequisite check "CheckSystemCommandAvailable" failed

When trying to apply a patch you may face this error. The solution is described below.

ERROR:

bash-4.2$ ../opatch apply
Oracle Interim Patch Installer version 12.2.0.1.5
Copyright (c) 2016, Oracle Corporation.  All rights reserved.

Oracle Home       : /oracle/app/oracle/product/12.1.0
Central Inventory : /oracle/app/oracle/oraInventory
   from           : /oracle/app/oracle/product/12.1.0/oraInst.loc
OPatch version    : 12.2.0.1.5
OUI version       : 12.1.0.2.0
Log file location : /oracle/app/oracle/product/12.1.0/cfgtoollogs/opatch/opatch2016-07-27_16-27-44PM_1.log

Verifying environment and performing prerequisite checks...
Prerequisite check "CheckSystemCommandAvailable" failed.
The details are:
Missing command :fuser
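
On RHEL-family systems the fuser binary normally comes from the psmisc package, so a likely fix is simply installing it (assuming a yum-based system and root access):

[root]# yum install -y psmisc
[root]# which fuser
/sbin/fuser

After that, re-run opatch apply and the CheckSystemCommandAvailable prerequisite should pass.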

X11 Forwarding - DISPLAY not SET when trying to run xclock

In order to use PuTTY with the X11 forwarding feature you will need to follow these instructions:

1. in the PuTTY configuration -> go to Connection settings -> SSH -> X11
    check the Enable X11 forwarding box
2. check in /etc/ssh/sshd_config that the X11Forwarding flag is set to yes
    if not - change it to yes and restart the sshd service
    [root]# service sshd restart
3. open your server terminal using PuTTY
4. check the DISPLAY parameter by typing: echo $DISPLAY
    you should get: localhost:10.0 (or something similar)
5. open Xming on your local machine (your PC)
6. type the xclock command and you should see it

Common problems that you can face during your xclock configuration experience, and solutions:

if xclock is not found on your Linux server -> install the xorg-x11-apps rpm:
[root]#  yum install xorg-x11-apps

xclock command gives a "Can't open display" output:
[root]# xclock
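
One more thing that often bites here (my addition, not part of the checklist above): if echo $DISPLAY comes back empty even with forwarding enabled, the server may be missing the xauth binary, which sshd needs in order to set up the forwarded display:

[root]# yum install xorg-x11-xauth

Then log out and open a new PuTTY session so sshd can create the X authority file.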

how do indexes become unusable?

There are a few reasons that make your indexes become unusable:

1. the UNUSABLE statement - ALTER INDEX idx UNUSABLE;
2. direct path writes to the index's table - such as using the sqlldr utility with the direct=y flag.
    In this case you can create duplicate PK values, and until you clear the duplication the PK index will stay in an unusable state.
    * using the /*+ APPEND */ hint behaves differently from sqlldr:
      although it is a direct path write as well, Oracle will give unique constraint errors instead.
3. reorganizing table extents by using the move command - ALTER TABLE tbl MOVE;

Take these operations into consideration when developing your application, since you will need to spend some time and resources fixing the indexes (rebuild, for example) during your process. I'm not saying direct path is bad, just be aware of the consequences. :-)

Good Luck,
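
If you suspect this has already happened, a quick way to find and fix the affected indexes (a sketch using the DBA views; swap in USER_/ALL_ views as appropriate, and note that partitioned indexes need REBUILD PARTITION per partition):

SELECT owner, index_name, status
FROM   dba_indexes
WHERE  status = 'UNUSABLE';

ALTER INDEX schema_name.idx REBUILD;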

Jenkins backup configuration and jobs

Hi there,
Backing up the Jenkins configuration is very simple, as you will see right now.

Under your $JENKINS_HOME you have several important folders (assuming your Jenkins was installed from RPM, your JENKINS_HOME is /var/lib/jenkins). One of these folders is jobs: if you need to back up only the configuration of your jobs, without any other information such as previous builds and artifacts, you can back up only the config.xml file of each job.

In case you have special plugins it's better for you to back up the $JENKINS_HOME/plugins folder as well. Custom nodes information can be found here: $JENKINS_HOME/nodes

These are necessary for a manual backup strategy. In case you need to restore Jenkins, you will need to install Jenkins from scratch and extract the backed up files under the correct folder structure.

A better way is to look for a backup plugin to integrate from your Jenkins console, for example: https://wiki.jenkins-ci.org/display/JENKINS/Backup+Plugin

enjoy
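
For the manual route, a minimal sketch of what such a backup could look like (paths follow the RPM layout mentioned above; the archive name and destination are placeholders):

JENKINS_HOME=/var/lib/jenkins
BACKUP=/backup/jenkins-config-$(date +%F).tar.gz
# top-level config, per-job config.xml files, plugins and node definitions
cd "$JENKINS_HOME" && tar -czf "$BACKUP" \
    *.xml \
    jobs/*/config.xml \
    plugins \
    nodes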

Claim back LOBSEGMENT space

After deleting rows from a table with a LOB column, you will still see the same number of extents and the same size capacity.

In order to claim back the space of the LOB segment you will need to run this command:

ALTER TABLE your_table MODIFY LOB (your_lob_column) (SHRINK SPACE);

Validation of reclaiming the space - identify the segment name using this query:

select * from DBA_LOBS where TABLE_NAME = 'your table' and COLUMN_NAME = 'your column';

Validate the size reduction by querying the DBA_EXTENTS view, where SEGMENT_NAME is the SEGMENT_NAME from the above query.

Good luck
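
A sketch of that validation query (the segment name literal is a placeholder for the value returned from DBA_LOBS):

SELECT segment_name,
       COUNT(*)             AS extents,
       SUM(bytes)/1024/1024 AS size_mb
FROM   dba_extents
WHERE  segment_name = 'SYS_LOB0000012345C00002$$'
GROUP  BY segment_name;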

[01000][unixODBC][Driver Manager]Can't open lib libsqora.so.11.1 : file not found

Hi,
Assuming you have installed these components correctly:
ORACLE DB (including instance and listener)
ODBC driver

and you are facing this problem:

[root@linuxRHEL6 bin]# isql -v simple
[01000][unixODBC][Driver Manager]Can't open lib '/usr/lib/oracle/11.2/client64/lib/libsqora.so.11.1' : file not found

you should check the following:

[root@linuxRHEL6 bin]# ldd /usr/lib/oracle/11.2/client64/lib/libsqora.so.11.1
ldd: warning: you do not have execution permission for `/usr/lib/oracle/11.2/client64/lib/libsqora.so.11.1'
        linux-vdso.so.1 =>  (0x00007fffa11d1000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00007f5c2585e000)
        libm.so.6 => /lib64/libm.so.6 (0x00007f5c255d9000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f5c253bc000)
        libnsl.so.1 => /lib64/libnsl.so.1 (0x00007f5c251a3000)
        libclntsh.so.11.1 => /usr/lib/oracle/11.2/client64/lib/libclntsh.so.11.1 (0x00007f5c2283300
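
The ldd output usually tells the story here: either the driver file itself isn't readable/executable, or one of its dependencies (typically libclntsh.so.11.1) cannot be resolved. A hedged checklist under the paths above, as a general diagnostic rather than the exact fix from this case:

# make sure the driver and its dependencies are readable and executable
chmod +rx /usr/lib/oracle/11.2/client64/lib/*.so*
# let the dynamic linker find the instant client libraries
echo /usr/lib/oracle/11.2/client64/lib > /etc/ld.so.conf.d/oracle-client.conf
ldconfig
# re-test
isql -v simple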