Friday, February 8, 2013

Utility and Setup for Testing JDBC Connections

This is a Windows-based solution.  Sorry, Linux fans.  The utility itself is Java, so I would assume this could easily be ported to Linux.

This was used for testing Oracle connections, but the utility should support any standard JDBC connection.

1.  Ensure java is installed and in your PATH.

2.  Download Jisql from http://www.xigole.com/software/jisql/jisql.jsp.  Unzip it and note the jar names in its lib directory; they go on the classpath below.

3.  Ensure you have the Oracle JDBC drivers installed and note the path to the jar files.

4.  Now, you should be ready for some simple testing.  A simple batch file like this will do the trick.
    @ECHO OFF

    :LOOPIT

    echo Query instance name from db

    date /t >> logfile.out

    time /t >> logfile.out

    echo First attempt using SID >> logfile.out

    java -classpath lib/jisql-2.0.11.jar;lib/jopt-simple-3.2.jar;lib/javacsv.jar;D:/app/melquist/product/11.2.0/client_1/jdbc/lib/ojdbc6.jar com.xigole.util.sql.Jisql -user system -password XXXXXXX ^
         -driver oraclethin -cstring jdbc:oracle:thin:@address-scan:1521:sidname -c ; -query "select host_name from gv$instance where instance_number=userenv('instance');" >> logfile.out

    echo Second attempt using SERVICE_NAME >> logfile.out

    java -classpath lib/jisql-2.0.11.jar;lib/jopt-simple-3.2.jar;lib/javacsv.jar;D:/app/melquist/product/11.2.0/client_1/jdbc/lib/ojdbc6.jar com.xigole.util.sql.Jisql -user system -password XXXXXXX ^
         -driver oraclethin -cstring jdbc:oracle:thin:@//address-scan:1521/servicname.yourcompany.com -c ; -query "select host_name from gv$instance where instance_number=userenv('instance');" >> logfile.out
   
    choice /n /t 10 /D Y /M "pausing 10 seconds"

    goto LOOPIT

5.  You can alternatively use TNS entries for testing the connections.  This was the best way I found to test a more complicated setup, such as a load balanced, fault tolerant connection.  Something like this: <em>(note - I made a copy of the tnsnames.ora file to c:\temp, then trimmed it down to just the two entries I wanted to test; a sample of the trimmed file is shown after this batch file)</em>
    @ECHO OFF

    :LOOPIT

    echo Query instance name from db

    date /t >> logfile.out

    time /t >> logfile.out

    echo First attempt using TNSENTRY >> logfile.out

    java -Doracle.net.tns_admin="c:/temp/" -classpath lib/jisql-2.0.11.jar;lib/jopt-simple-3.2.jar;lib/javacsv.jar;D:/app/melquist/product/11.2.0/client_1/jdbc/lib/ojdbc6.jar com.xigole.util.sql.Jisql ^
         -user system -password XXXXXX -driver oraclethin -cstring jdbc:oracle:thin:@TNSENTRY -c ; -query "select host_name from gv$instance where instance_number=userenv('instance');" >> logfile.out

    echo Second attempt using WAREHOUSE3 >> logfile.out

    java -Doracle.net.tns_admin="c:/temp/" -classpath lib/jisql-2.0.11.jar;lib/jopt-simple-3.2.jar;lib/javacsv.jar;D:/app/melquist/product/11.2.0/client_1/jdbc/lib/ojdbc6.jar com.xigole.util.sql.Jisql ^
         -user system -password XXXXXX -driver oraclethin -cstring jdbc:oracle:thin:@WAREHOUSE3 -c ; -query "select host_name from gv$instance where instance_number=userenv('instance');" >> logfile.out
   
    choice /n /t 10 /D Y /M "pausing 10 seconds"

    goto LOOPIT
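
For reference, the trimmed c:\temp\tnsnames.ora might look something like this.  Only the entry names TNSENTRY and WAREHOUSE3 come from the batch file above; the hosts and service name here are placeholders for your own:

    # Trimmed tnsnames.ora in c:\temp - just the two entries under test
    TNSENTRY =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = address-scan)(PORT = 1521))
        (CONNECT_DATA =
          (SERVICE_NAME = servicname.yourcompany.com)
        )
      )

    WAREHOUSE3 =
      (DESCRIPTION =
        (LOAD_BALANCE = ON)
        (FAILOVER = ON)
        (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
        (CONNECT_DATA =
          (SERVICE_NAME = servicname.yourcompany.com)
        )
      )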

Thursday, February 7, 2013

Commit Strategy for Large Transactions

Let's face it.  Every DBA has come across a need to update a LOT of records.  The problem usually is that a mass update can take forever, blow out your redo logs, lock out other access, etc.  Here is the solution.  This was written and used in Oracle, but I'm thinking it could easily be modified to work with any ANSI-compliant relational database.

DECLARE

CURSOR cur IS
   select * from owner.table_temp;

ct NUMBER(5) := 0;
group_size NUMBER(5) := 5000;

BEGIN
    FOR srmvt IN cur LOOP
        INSERT INTO owner.table (<columns>)
            VALUES (srmvt.<columns>);
        ct := ct + 1;
        IF ct >= group_size THEN
            commit;
            ct := 0;
        END IF;
    END LOOP;
    commit;
END;
/

I have used this for updates as well.

DECLARE
CURSOR cur2 IS
   select * from yourcompany.yourtable;

ct2 NUMBER(5) := 0;
group_size NUMBER(5) := 5000;
BEGIN
    FOR srmvt2 IN cur2 LOOP
        update yourcompany.yourtable set process_flag = 'SUCCESS'
        where transactionid = srmvt2.transactionid;
        ct2 := ct2 + 1;
        IF ct2 >= group_size THEN
            commit;
            ct2 := 0;
        END IF;
    END LOOP;
    commit;
END;
/
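
If the row-by-row loop is too slow, the same batching idea can be written with BULK COLLECT and FORALL, which cuts down the context switching between PL/SQL and SQL.  A rough sketch of the insert version, not from my original script, and assuming owner.table_temp and owner.table have matching column lists:

DECLARE
    CURSOR cur IS
       select * from owner.table_temp;

    TYPE t_rows IS TABLE OF cur%ROWTYPE;
    l_rows t_rows;
    group_size CONSTANT PLS_INTEGER := 5000;
BEGIN
    OPEN cur;
    LOOP
        -- Pull one batch of rows at a time
        FETCH cur BULK COLLECT INTO l_rows LIMIT group_size;
        EXIT WHEN l_rows.COUNT = 0;
        -- One bulk insert and one commit per batch
        FORALL i IN 1 .. l_rows.COUNT
            INSERT INTO owner.table VALUES l_rows(i);
        commit;
    END LOOP;
    CLOSE cur;
END;
/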

Exclusion List Script

This may not be of interest to most of my blog readers, and I apologize for that. However, after struggling for days with a good way to accomplish what I needed, and now having come up with a somewhat good solution, I wanted to publish it out here so I don't lose it. No sense reinventing the wheel, right?

My problem was I needed a good way to track an exclusion list. Normally I'd dump it into a table, but this particular list needed to be used on dozens of databases. I didn't want to maintain multiple copies of the same data - that would be a nightmare. And I didn't want to create dozens of database links - I don't really like them anyway. I thought about hard coding the list in my script, but that seemed kind of ugly. And the list needs to be SQL-ready, which means each element needs to be enclosed in single quotes and be comma delimited. Let me tell you, bash was not crazy about the quotes. At one point, I had the list hard coded in my script, and had escaped the quotes. Not pretty. It looked something like this:
EXCEPTIONLIST="\(\'one\'\,\'two\'\,\'three\'\)"

Bear in mind the end result for the SQL query just needs to be:
('one','two','three')

Option one. I created a table in my central database. I then queried it one time, put it in a variable, then used that for the queries to all the other databases. The bash code looks like this:
EXCLUDED_USERS=`sqlplus -s " / as sysdba" <<EOF
set pages 9999
set lines 132
set trim on
set trims on
set echo off
set recsep off
set head off
set feedback off
set newpage none
select username from owner.table_name;
exit;
EOF`

echo $EXCLUDED_USERS
# Init the variable, put on the left paren.
DELIMITED_LIST=`echo -ne '('`
for USER in $EXCLUDED_USERS
do
# reconstruct the list, but add in the commas and single quotes.
DELIMITED_LIST=`echo $DELIMITED_LIST"'"$USER"',"`
done
# Remove the last comma we really did not need.
DELIMITED_LIST=$(echo ${DELIMITED_LIST%\,})
# Add in the right paren.
DELIMITED_LIST=`echo -ne $DELIMITED_LIST")"`
# Use the same variable name the final query expects.
EXCLUDED_USERS=$DELIMITED_LIST

Option two. Just for grins, I wanted to see if it would work with a space delimited variable list in the script. It looks like this. I ripped out the actual sensitive data.....

# Exclude listing. Put into multiple vars for readability.
# If you add a line, make sure you add it to the line that puts them all together.
# One space between entries please.
EXCLUDED_USERS1="blah blah2"
EXCLUDED_USERS2="blah3 blah4"
EXCLUDED_USERS="$EXCLUDED_USERS1 $EXCLUDED_USERS2 $EXCLUDED_USERS3 $EXCLUDED_USERS4 $EXCLUDED_USERS5"

# Make sure we are set to a space delimiter
IFS=" "
# Init the variable, put on the left paren.
DELIMITED_LIST=`echo -ne '('`
for USER in $EXCLUDED_USERS
do
# reconstruct the list, but add in the commas and single quotes.
DELIMITED_LIST=`echo $DELIMITED_LIST"'"$USER"',"`
done
# Remove the last comma we really did not need.
DELIMITED_LIST=$(echo ${DELIMITED_LIST%\,})
# Add in the right paren.
DELIMITED_LIST=`echo -ne $DELIMITED_LIST")"`
EXCLUDED_USERS=$DELIMITED_LIST

The basic principle is the same either way. Capture the list into a variable, then walk through it, putting the quotes, commas and parentheses back in. Then it's good to go for the actual query:

USERLIST=`sqlplus -s "/as sysdba" <<EOF
set pages 9999
set lines 132
set trim on
set trims on
set echo off
set recsep off
set head off
set feedback off
set newpage none
select username
from dba_users
where account_status = 'OPEN'
and username not in
(select distinct owner
from dba_objects)
and username not in $EXCLUDED_USERS
order by username
/
exit
EOF`
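
As an aside, if your bash is 3.1 or newer, printf -v can replace the whole quoting loop.  A minimal sketch, where EXCLUDED_USERS still holds the raw space-delimited names:

# Wrap each name in single quotes plus a trailing comma, then trim the last comma and add the parens.
printf -v DELIMITED_LIST "'%s'," $EXCLUDED_USERS
DELIMITED_LIST="(${DELIMITED_LIST%,})"
EXCLUDED_USERS=$DELIMITED_LIST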

And that is more than you ever wanted to know about working in IT.......

SQL Server Status SQL Script

Yeah, I know.  Another boring work post.  I have to save these things somewhere!

/* The SQL Server mail procedure (sp_send_dbmail) does not like single quotes in the
   query text, and the query runs in its own session, so #TEMP tables are not visible to it.

   To get around this, we use regular tables but drop them when we are done.
*/

use master
go

/* Get the error messages, agent.
*/

CREATE TABLE Errors (LogDate datetime, ErrorLevel int, vchMessage varchar(255))
CREATE INDEX idx_msg ON Errors(LogDate, vchMessage)
INSERT Errors EXEC xp_readerrorlog 0, 2
go

/* Get the error messages, server.
*/
CREATE TABLE Errors2 (LogDate datetime, ProcessInfo varchar(255), vchMessage text)
CREATE INDEX idx_msg2 ON Errors2(LogDate)
INSERT Errors2 EXEC xp_readerrorlog 0, 1
go


SELECT B.name as Database_Name, ISNULL(STR(ABS(DATEDIFF(day, GetDate(),
MAX(Backup_finish_date)))), 'NEVER') as DaysSinceLastBackup,
ISNULL(Convert(char(10), MAX(backup_finish_date), 101), 'NEVER') as LastBackupDate
into BackupResults
FROM master.dbo.sysdatabases B LEFT OUTER JOIN msdb.dbo.backupset A
ON A.database_name = B.name AND A.type = 'D' GROUP BY B.Name ORDER BY B.name
go

/* We really don't need to see the backup messages that get written to the log.
*/
delete from Errors
WHERE vchMessage LIKE '%Log backed up%' or
vchMessage LIKE '%.TRN%' or vchMessage LIKE '%Database backed up%' or
vchMessage LIKE '%.BAK%' or vchMessage LIKE '%Run the RECONFIGURE%'
go

delete from Errors2
WHERE vchMessage LIKE '%Log backed up%' or
vchMessage LIKE '%.TRN%' or vchMessage LIKE '%Database backed up%' or
vchMessage LIKE '%.BAK%' or vchMessage LIKE '%Run the RECONFIGURE%'
go

/* Move to target database, the one with filegroups that tend to fill up.
Create the temp table back in master, though. That is where the others are.
*/
use target
go

SELECT
cast(b.groupname as varchar(30)) [groupname],
Name,
[Filename],
CONVERT (Decimal(15,2),ROUND(a.Size/128.000,2))
[CurrentAllocatedMBSpace],
CONVERT (Decimal(15,2),
ROUND(FILEPROPERTY(a.Name,'SpaceUsed')/128.000,2))
AS [SpaceUsedMB],
CONVERT (Decimal(15,2),
ROUND((a.Size-FILEPROPERTY(a.Name,'SpaceUsed'))/128.000,2))
AS [FreeSpaceMB]
into master..FileGroupSpace
FROM dbo.sysfiles a (NOLOCK)
JOIN sysfilegroups b (NOLOCK) ON a.groupid = b.groupid
ORDER BY b.groupname
go

use master
go

EXEC msdb.dbo.sp_send_dbmail
@recipients=N'melquist@yourcompany.com',
@body='Status report is attached',
@body_format='TEXT',
@subject ='SQL Server Status Report',
@profile_name ='yourcompany',
@query ='EXEC master..xp_fixeddrives;
select * from Errors;
select LogDate, cast (ProcessInfo as varchar(10)) [ProcessInfo], cast(vchMessage as varchar(150)) [Message] from Errors2;
select * from BackupResults;
SELECT name "Failed Jobs" FROM msdb.dbo.sysjobs A, msdb.dbo.sysjobservers B WHERE A.job_id = B.job_id AND B.last_run_outcome = 0;
SELECT name "Disabled Jobs" FROM msdb.dbo.sysjobs WHERE enabled = 0 ORDER BY name;
select groupname, sum(CurrentAllocatedMBSpace) [Currently Allocated Space (MB)], sum(SpaceUsedMB) [Currently Used Space (MB)], sum(FreeSpaceMB) [Current Free Space (MB)]
from FileGroupSpace
group by groupname',
@attach_query_result_as_file = 1,
@query_attachment_filename ='QueryResults.txt'
go

Drop table Errors
Drop table Errors2
Drop table BackupResults
Drop table FileGroupSpace
go
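
To run this on a schedule, one option is a SQL Agent job whose step calls the script from a file with sqlcmd.  A hypothetical sketch; the job name, schedule, and the C:\scripts\status_report.sql path are made up:

use msdb
go

EXEC dbo.sp_add_job @job_name = N'Daily status report';
EXEC dbo.sp_add_jobstep @job_name = N'Daily status report',
    @step_name = N'Run status script',
    @subsystem = N'CmdExec',
    @command = N'sqlcmd -E -S . -i "C:\scripts\status_report.sql"';
EXEC dbo.sp_add_jobschedule @job_name = N'Daily status report',
    @name = N'Every morning',
    @freq_type = 4,
    @freq_interval = 1,
    @active_start_time = 070000;
EXEC dbo.sp_add_jobserver @job_name = N'Daily status report', @server_name = N'(local)';
go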

Creating a ward directory

Again, another "exciting" post, which is pretty much just Mike making notes so he can retrace his steps someday if needed.

1.  Log into MLS.  Export the Membership data.  If you need help with this, let me know.

2.  Transfer the csv file to your database server.  Well, in my case, I'm running MySQL on my local desktop.

3.  Log into MySQL!

4.  create database MLS;

5.  create table MLS.Membership
(IndivID int(5) not null,
FullName varchar(50) not null,
PreferredName varchar(50) not null,
HofHID int(5) not null,
HHPosition varchar(50) not null,
HHOrder int(2) not null,
HouseholdPhone varchar(20) null,
IndividualPhone varchar(20) null,
HouseholdEmail varchar (50) null,
IndividualEmail varchar(50) null,
Street1 varchar(100) null,
Street2 varchar(100) null,
DP varchar(20) null,
City varchar(20) null,
Postal varchar(10) null,
State varchar(25) null,
Country varchar(25) null,
2Street1 varchar(100) null,
2Street2 varchar(100) null,
2DP varchar(20) null,
2City varchar(20) null,
2Zip varchar(10) null,
2State varchar(25) null,
2Country varchar(125) null,
WardGeoCode int(10) null,
StakeGeoCode int(10) null,
Sex char(1) not null,
Birth date not null,
Baptized date null,
Confirmed date null,
Endowed date null,
RecExp date null,
Priesthood varchar(20) null,
Mission char(3) null,
Married char(10) null,
SpouseMember char(3) null,
SealedtoSpouse char(3) null,
SealedtoPrior char(3) null);

6.  load data local infile 'C:\\Users\\melquist\\Dropbox\\Mike\\church\\Membership.csv' into table MLS.Membership fields terminated by ','
enclosed by '"'
lines terminated by '\n'
(IndivID,FullName,PreferredName,HofHID,HHPosition,HHOrder,HouseholdPhone,IndividualPhone,HouseholdEmail,IndividualEmail,Street1,Street2,DP,City,Postal,State,Country,2Street1,2Street2,2DP,2City,2Zip,2State,2Country,WardGeoCode,StakeGeoCode,Sex,@Birth,@Baptized,@Confirmed,@Endowed,@RecExp,Priesthood,Mission,Married,SpouseMember,SealedtoSpouse,SealedtoPrior)
set
Birth = str_to_date(@Birth, '%d %b %Y'),
Baptized = str_to_date(@Baptized, '%d %b %Y'),
Confirmed = str_to_date(@Confirmed, '%d %b %Y'),
Endowed = str_to_date(@Endowed, '%d %b %Y'),
RecExp = str_to_date(@RecExp, '%b %Y');

7.  delete from Membership where IndivID = 0;

8.  Query that bad boy!
select * from
(
    SELECT HH.PreferredName "Name", S.PreferredName "SpouseName", HH.Street1, HH.HouseholdPhone, HH.IndividualPhone "CellPhone", S.IndividualPhone "SpouseCellPhone"
    FROM mls.membership HH,
         mls.membership S
    where HH.HHPosition in ('Head of Household','Spouse')
    and HH.HofHID = S.HofHID
    and S.HHPosition = 'Spouse'
    UNION
    Select PreferredName "Name", NULL "SpouseName", Street1, HouseHoldPhone, IndividualPhone "CellPhone", NULL "SpouseCellPhone"
    from mls.membership
    where married = 'Single'
    and HHPosition = 'Head of Household'
) PhoneListing
where Name <> SpouseName
or SpouseName is null
order by Name;

A few things to watch out for....

-> The date formats.  At least MySQL is pretty picky about those.  (A quick sanity check is shown below.)
-> Rather than HHPosition, you could also query off of HHOrder.  Either way, I'm a bit leery of "smart" data.  Keep an eye on the data to make sure the key columns don't change in value.
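
For the date format issue mentioned above, it is worth testing the format string against a sample value before running the full load.  The sample date here is made up:

-- Expect 1972-02-14; a NULL means the format string does not match the csv
select str_to_date('14 Feb 1972', '%d %b %Y');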

Adding in children to MLS report

This goes along with my previous post on extracting the MLS data to create a usable phone listing.  This takes that and adds in the children's names and birthdays.  Note:  you really need Perl and cygwin installed on your Windows machine.  Seriously.

1.  mlsdirectory.sql
select * from
(
    SELECT HH.HofHID, HH.PreferredName "Name", S.PreferredName "SpouseName", HH.Street1, HH.HouseholdPhone, HH.IndividualPhone "CellPhone", S.IndividualPhone "SpouseCellPhone"
    FROM mls.membership HH,
         mls.membership S
    where HH.HHPosition in ('Head of Household','Spouse')
    and HH.HofHID = S.HofHID
    and S.HHPosition = 'Spouse'
    UNION
    Select HofHID, PreferredName "Name", NULL "SpouseName", Street1, HouseHoldPhone, IndividualPhone "CellPhone", NULL "SpouseCellPhone"
    from mls.membership
    where married = 'Single'
    and HHPosition = 'Head of Household'
) PhoneListing
where Name <> SpouseName
or SpouseName is null
order by Name;

2.  mlschildren.sql
Select HofHID, PreferredName, Birth
    from mls.membership
where HHPosition = 'Other';
 
3.  mlsdirectory.pl <em>(a variant that reads the children file only once is sketched after these steps)</em>
open (MLS, 'c:\temp\mlsdirectory.out');
open (TSV, '>c:\temp\mlsdirectory.tsv');
while (<MLS>)
{
    chomp;
    ($HofHID, $HofHNAME, $SpouseNAME, $address, $homephone, $cellphone, $spousephone) = split("\t");
    print (TSV $HofHNAME);
    print (TSV "\t");
    print (TSV $SpouseNAME);
    print (TSV "\t");
    print (TSV $address);
    print (TSV "\t");
    print (TSV $homephone);
    print (TSV "\t");
    print (TSV $cellphone);
    print (TSV "\t");
    print (TSV $spousephone);
    print (TSV "\n");
    open (CHILD, 'c:\temp\mlschildren.out');
    while (<CHILD>)
    {
        chomp;
        ($CofHID, $CofPNAME, $cbirth) = split("\t");
        if ($CofHID == $HofHID)
        {
            print (TSV $CofPNAME);
            print (TSV "\t");
            print (TSV $cbirth);
            print (TSV "\n");
        }
    }
    close (CHILD);
}
close (MLS);
close (TSV);

4.  mlsdirectory.bat
d:
cd \MySQL\MySQL Server 5.5\bin

mysql -uroot -pxxxxxx < c:\scripts\mlsdirectory.sql > c:\temp\mlsdirectory.out
mysql -uroot -pxxxxxx < c:\scripts\mlschildren.sql > c:\temp\mlschildren.out

c:
cd\scripts
perl mlsdirectory.pl

This creates a tab delimited file.
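
The mlsdirectory.pl script above re-reads c:\temp\mlschildren.out once per household.  That is fine for a ward-sized file, but if it ever gets slow, the children can be read once into a hash keyed on HofHID.  A rough sketch, not the version I actually ran:

# Read the children file once, keyed by head-of-household ID.
open (CHILD, 'c:\temp\mlschildren.out');
my %children;
while (<CHILD>)
{
    chomp;
    my ($CofHID, $CofPNAME, $cbirth) = split("\t");
    push @{ $children{$CofHID} }, "$CofPNAME\t$cbirth";
}
close (CHILD);

# Then, inside the main MLS loop, the inner CHILD loop becomes:
#     foreach my $kid (@{ $children{$HofHID} || [] })
#     {
#         print (TSV "$kid\n");
#     }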

Find and break Oracle jobs

When doing a production clone, it is often useful to break all the jobs, drop database links, etc.  The following is a bash script that breaks every job.  One wrinkle: jobs created with DBMS_JOB can only be altered by the schema that owns them, so the script generates a small procedure in each owning schema to make the dbms_job.broken call, runs it, and then drops it.

########################################################################
# Break all the freakin' jobs so that Streams can instantiate!         #
########################################################################

if [ "${1}" = "" ] ; then
  echo "Usage ${0} sid,  exiting..."
  exit 1
fi
. /home/oracle/.bashrc.CLIENT

TARGET_SID=${1}
PRIVATE_FILE=/home/oracle/.private

MASTER_USER=system
MASTER_PASS=`cat $PRIVATE_FILE|grep -i "${TARGET_SID}:"|grep ":SYSTEM:"|head -1|awk -F: '{print $3}'`
echo $MASTER_USER
echo $MASTER_PASS

# Find 'em and Break 'em
sqlplus -s "$MASTER_USER/$MASTER_PASS@$TARGET_SID" &lt;&lt;EOF
set pages 9999
set lines 300
set trim on
set trims on
set feedback off
set termout off
set verify off
set head off
set newpage none
set echo off
set term off
spool OUT/create_procedures.sql

select 'create or replace procedure '||schema_user||'.drop_job_'||job||' as '||chr(10)||'begin'||chr(10)||'dbms_job.broken ('||job||', true);'||chr(10)||'end;'||chr(10)||'/'||chr(10)
from dba_jobs
where schema_user not in ('SYS', 'SYSTEM', 'SYSMAN')
and broken = 'N';


spool off;

spool OUT/run_procedures.sql

select 'exec '||schema_user||'.drop_job_'||job||';'||chr(10)||'commit;'||chr(10)
from dba_jobs
where schema_user not in ('SYS', 'SYSTEM', 'SYSMAN')
and broken = 'N';

spool off;

spool OUT/drop_procedures.sql

select 'drop procedure  '||schema_user||'.drop_job_'||job||';'
from dba_jobs
where schema_user not in ('SYS', 'SYSTEM', 'SYSMAN')
and broken = 'N';

spool off;

@ OUT/create_procedures.sql
@ OUT/run_procedures.sql
@ OUT/drop_procedures.sql
exit
EOF
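
DBMS_SCHEDULER jobs are a separate beast from DBMS_JOB; a privileged user can disable those directly, so no per-schema procedures are needed.  Something along these lines could be spooled and run the same way (a sketch, not part of the script above):

select 'exec dbms_scheduler.disable('''||owner||'.'||job_name||''', force => TRUE);'
from dba_scheduler_jobs
where owner not in ('SYS', 'SYSTEM', 'SYSMAN')
and enabled = 'TRUE';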

Count the number of text occurrences in a file

Just another boring work note.  Sorry, not that entertaining.  But if you are on Linux and want to count the number of occurrences of each distinct line (ignoring case) in a file, this rocks!

 awk 'NF{ count[ toupper( $0 ) ]++}
END{ for (name in count ) { print name " appears " count[ name ] " times" };
}' /tmp/companyname_services.log

In this instance, the input file was /tmp/companyname_services.log
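
If you want per-word counts instead of per-line counts, a small tweak along the same lines does it:

awk '{ for (i = 1; i <= NF; i++) count[ toupper( $i ) ]++ }
END{ for (name in count ) { print name " appears " count[ name ] " times" };
}' /tmp/companyname_services.log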

PL/SQL + Bash to compare v$session to max allowed

This assumes you have it encapsulated in a bash script that has already set your environment (MONITOR_USER, CHECK_PASS, SID) correctly.  It will give you a percent-used value, captured into the MAXPERCENT shell variable.

        MAXPERCENT=`sqlplus -s -L ${MONITOR_USER}/${CHECK_PASS}@${SID} <<EOF
              set pages 9999
              set lines 132
              set trim on
              set trims on
              set echo off
              set recsep off
              set head off
              set feedback off
              set newpage none
              set serveroutput on
                DECLARE MaxAllowed number(4);
                CurrentSessions number(4);
                UsedPercent number(5,2);
                begin
                select value
                into MaxAllowed
                from v\\$parameter
                where name = 'sessions';
                select count(*)
                into CurrentSessions
                from v\\$session;
                UsedPercent := round((CurrentSessions / MaxAllowed)*100,0);
                dbms_output.Put_line(UsedPercent);
                end;
                /
        exit
EOF`
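
From there, the shell side can act on the number.  A hypothetical follow-up; the threshold, recipient, and the mailx call are just examples:

        # Alert if session usage crosses a threshold
        THRESHOLD=90
        if [ ${MAXPERCENT} -ge ${THRESHOLD} ]; then
            echo "${SID}: sessions at ${MAXPERCENT}% of the sessions parameter" | \
                mailx -s "Session usage warning for ${SID}" dba@yourcompany.com
        fi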