Monday, August 5, 2013

Migrating from Access to Oracle

Two options.  One provides a lot more detail if you need it (constraints, indexes, DDL for tables, control files for SQL*Loader, etc.).  The other simply creates a delimited text file that can then be imported using SQL*Loader.

Option A - Access Export.
Pretty simple strategy here.
  1. Open the Access file.
  2. Go to the Tables section.
  3. Click on the table to select it.
  4. Right click on the table, then choose Export > Text File.
  5. In the dialog box, choose a file name.  Make sure the 3 boxes are all unchecked.  Click OK.
  6. Choose Delimited, then click Next.
  7. Choose Comma delimited, with a Text Qualifier of a double quote.
  8. Click on Next, then Finish.
  9. In Oracle, create a table that matches the Access one.
  10. Create the control file for SQL*Loader (a sketch follows this list).
  11. Now use SQL*Loader to load the data into your new Oracle table.
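Here is roughly what steps 10 and 11 look like.  A minimal sketch, assuming a hypothetical CUSTOMERS table with three columns and the export saved as customers.txt; adjust the names and the date mask to match your own table.

-- customers.ctl
LOAD DATA
INFILE 'customers.txt'
INTO TABLE customers
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(cust_id,
 cust_name,
 created_dt DATE "MM/DD/YYYY HH24:MI:SS")

Then point SQL*Loader at it:

sqlldr userid=scott/tiger@orcl control=customers.ctl log=customers.log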
Option B - SQL Developer Migration.
This did not work as advertised for me.  Here are the steps I took to finally be able to migrate a single table from Access to Oracle, using the SQL Developer migration utility.
  1. Use SQL Developer 3, not the newer 4.  Version 4 does not seem to map any data types, so it fails miserably.
  2. Connect to the target instance.
  3. Right click, Migration Repository, then Associate Migration Repository.
  4. Tools, Migration, Microsoft Access Exporter, Run Microsoft Access 2007 Exporter....
  5. This should bring up Access.  Pretty simple.  Pick the mdb to migrate, and make sure the "migrate data" box is checked.  I used c:\temp for the output directory.
  6. Now, back in SqlDeveloper, Tools, Migration, Migrate.  Pick your target connection, give it a name, choose an output directory.  Whenever you have the choice, pick "Offline"!  This is key, as we are going to hack some files when we are done.
  7. Once the migration finishes, it should bring up the script in a worksheet.  Just find the part with the create table ddl, and run that to create your target table.
  8. Now that you have an empty table, shell out to a command prompt and drill down until you find the control file SQL*Loader will use.  Put in the correct location of the data file.  I put everything under c:\temp, so after drilling around a bit it wasn't too hard to locate the data and ctl files.
  9. Also, check your date formats.  In my case, the generated masks were not even close.  The dates were in mm/dd/yyyy hh24:mi:ss format, but the control file had them masked completely wrong (see the example after this list).
  10. Now, use SQL*Loader to load the data into your newly created table.
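The fix was just editing the date mask in the generated ctl file to match the data.  With a hypothetical HIREDATE column, the change looks like this:

-- before (as generated):    "HIREDATE" DATE "YYYY-MM-DD"
-- after (matches the data): "HIREDATE" DATE "MM/DD/YYYY HH24:MI:SS"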
Ugly, but it did finally work.

Mike

Tuesday, June 25, 2013

Drop db links and jobs in other schemas

A quick workaround for working in other schemas when you don't know the password: as SYSTEM, generate a procedure in each target schema, execute it, then drop it.

############################################################################

if [ "${1}" = "" ] ; then
  echo "Usage ${0} sid,  exiting..."
  exit 1
fi
. /home/oracle/.bashrc.CLIENT

TARGET_SID=${1}
PRIVATE_FILE=/home/oracle/.private

MASTER_USER=system
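# .private is assumed to hold colon-delimited SID:SYSTEM:password lines; field 3 is the password.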
MASTER_PASS=`cat $PRIVATE_FILE|grep -i "${TARGET_SID}:"|grep ":SYSTEM:"|head -1|awk -F: '{print $3}'`
echo $MASTER_USER
echo $MASTER_PASS

# Find 'em and Drop 'em
sqlplus -s "$MASTER_USER/$MASTER_PASS@$TARGET_SID" <<EOF
set pages 9999
set lines 300
set trim on
set trims on
set feedback off
set termout off
set verify off
set head off
set newpage none
set echo off
set term off
spool OUT/create_procedures.sql
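-- NOTE: keep the WHERE clause identical in all three generator queries below;
-- the rownum-based procedure names only line up if the row sets match.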

select 'create or replace procedure '||owner||'.drop_dblink_'||rownum||' as '||chr(10)||'begin'||chr(10)||'execute immediate ''drop database link '||db_link||''';'||chr(10)||'end;'||chr(10)||'/'||chr(10)
from dba_db_links
where owner not in ('SYS', 'SYSTEM', 'SYSMAN', 'STRMADMIN')
order by owner, db_link;


spool off;

spool OUT/run_procedures.sql

select 'exec '||owner||'.drop_dblink_'||rownum||';'||chr(10)||'commit;'||chr(10)
from dba_db_links
where owner not in ('SYS', 'SYSTEM', 'SYSMAN', 'STRMADMIN')
order by owner, db_link;

spool off;

spool OUT/drop_procedures.sql

select 'drop procedure  '||owner||'.drop_dblink_'||rownum||';'
from dba_db_links
where owner not in ('SYS', 'SYSTEM', 'SYSMAN', 'STRMADMIN')
order by owner, db_link;

spool off;

@ OUT/create_procedures.sql
@ OUT/run_procedures.sql
@ OUT/drop_procedures.sql
exit
EOF


########################################################################

if [ "${1}" = "" ] ; then
  echo "Usage ${0} sid,  exiting..."
  exit 1
fi
. /home/oracle/.bashrc.CLIENT

TARGET_SID=${1}
PRIVATE_FILE=/home/oracle/.private

MASTER_USER=system
MASTER_PASS=`cat $PRIVATE_FILE|grep -i "${TARGET_SID}:"|grep ":SYSTEM:"|head -1|awk -F: '{print $3}'`
echo $MASTER_USER
echo $MASTER_PASS

# Find 'em and Break 'em
sqlplus -s "$MASTER_USER/$MASTER_PASS@$TARGET_SID" <<EOF
set pages 9999
set lines 300
set trim on
set trims on
set feedback off
set termout off
set verify off
set head off
set newpage none
set echo off
set term off
spool OUT/create_procedures.sql

select 'create or replace procedure '||schema_user||'.drop_job_'||job||' as '||chr(10)||'begin'||chr(10)||'dbms_job.broken ('||job||', true);'||chr(10)||'end;'||chr(10)||'/'||chr(10)
from dba_jobs
where schema_user not in ('SYS', 'SYSTEM', 'SYSMAN')
and broken = 'N';


spool off;

spool OUT/run_procedures.sql

select 'exec '||schema_user||'.drop_job_'||job||';'||chr(10)||'commit;'||chr(10)
from dba_jobs
where schema_user not in ('SYS', 'SYSTEM', 'SYSMAN')
and broken = 'N';

spool off;

spool OUT/drop_procedures.sql

select 'drop procedure  '||schema_user||'.drop_job_'||job||';'
from dba_jobs
where schema_user not in ('SYS', 'SYSTEM', 'SYSMAN')
and broken = 'N';

spool off;

@ OUT/create_procedures.sql
@ OUT/run_procedures.sql
@ OUT/drop_procedures.sql
exit
EOF


Tuesday, April 30, 2013

Desktop Linux

I've been running Ubuntu on my desktop since leaving Novell back in 2012.  I was feeling a bit nostalgic the other day and decided to build out a VM with the latest version of OpenSUSE.  Let me share a few bullet points about the experience.

1.  The OpenSUSE install took twice as long as the Ubuntu install.

2.  Boot time is incredibly slow with OpenSUSE.  Honestly, I can start booting it, go get a drink, start up my Ubuntu VM, reboot it a couple of times, get up and go to the bathroom, come back, and OpenSUSE will still be booting.

3.  Response time is even slower.  It's painful actually.

4.  The update process is HORRIBLE.  When OpenSUSE first popped up and told me I had 174 updates, I thought, OK, no big deal, right?  Well, the graphical updater went into some sort of infinite loop.  I had to kill it after about the 6th time it restarted itself.  In the end, I opted for a simple "zypper patch" as root at the command line.  That sucker has been running for over 2 hours.  By comparison, Ubuntu updates rarely take in excess of 5 minutes.

As much as I miss the good old Novell days, I think I am converted to running Ubuntu.

I must say, running OpenSUSE in seamless mode does rock.  It is the one thing I've found it does better than Ubuntu.


Friday, April 5, 2013

Access SQL Server from Linux

Let's face it.  Ever since before they started labeling the SQL Server releases as 2000-whatever, there has been a need to access SQL Server from other operating systems.  I'm sure Java would be a slick way to go, but for an administrator used to living at a bash prompt, that just won't cut it.

Thankfully, as part of some consulting work, I stumbled upon FreeTDS.  It's pretty slick.
1.  Download FreeTDS from http://www.freetds.org/

2.  Install FreeTDS
 - copy the compressed file to the target server.
 - uncompress the software bundle.
 - cd into the FreeTDS source.
 - ./configure
 - make
 - make install
 - the default is to install to /usr/local

3.  Configure FreeTDS
 - cd /usr/local/etc
 - vi freetds.conf
 - add in a line for the target server
 # A typical Microsoft server
 [EvolveDB]
     host = 137.65.51.90
     port = 1433
     tds version = 7.0

4.  Query your server
 tsql -S <your name from the freetds.conf file> -U <username> -P <password>
 If my user were admin, my password friday, and the server name EvolveDB:
 tsql -S EvolveDB -U admin -P friday

5.  Use tsql in batch mode
 tsql is very limited.  Simply use redirects to either run scripts or capture output.
 For example, using the above connection, and running a script named mike.sql and capturing that to mike.out
 tsql -S EvolveDB -U admin -P friday < mike.sql > mike.out

I've found that tsql does some funky things with the output.  For example, you can pipe it directly to awk, but if you toss sed in there as well to just remove spaces, it blitzes the tabs as well.  I've had the best luck sticking a ~ in as a delimiter, then using that with awk to parse the columns back out.
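Here is a sketch of what that looks like, using a throwaway query against sysdatabases (hypothetical, adjust to taste).  The awk filter drops the prompt and banner lines, since they have no ~ in them.

tsql -S EvolveDB -U admin -P friday <<EOF | awk -F'~' 'NF > 1 { print $1, $2 }'
select name + '~' + cast(dbid as varchar(10)) from master..sysdatabases
go
EOF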


Thursday, March 14, 2013

Test writing to an Oracle directory

Let's say you want to make sure you can write to a directory you created.

1.  select * from dba_directories where directory_path = '/usr/tmp';

2.  In my case, I had three.  I decided to test the ODPDIR directory.

3.
DECLARE
  l_file utl_file.file_type;
BEGIN
  l_file := utl_file.fopen( 'ODPDIR', 'ODPDIR.txt', 'W' );
  utl_file.put_line( l_file, 'Here is some text' );
  utl_file.fclose( l_file );
END;
/

4.  I ssh into my server, cd to /usr/tmp, and I see a file called ODPDIR.txt has been created.  Whoot!
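If you want to test the read path as well, the same idea in reverse works.  A quick sketch that reads back the line just written (needs set serveroutput on):

DECLARE
  l_file utl_file.file_type;
  l_line varchar2(32767);
BEGIN
  l_file := utl_file.fopen( 'ODPDIR', 'ODPDIR.txt', 'R' );
  utl_file.get_line( l_file, l_line );
  dbms_output.put_line( l_line );
  utl_file.fclose( l_file );
END;
/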

Friday, February 8, 2013

Utility and Setup for Testing JDBC Connections

This is a Windows-based solution.  Sorry, Linux fans.  The utility is available in a Linux format, so I would assume this could easily be ported to Linux.

This was used for testing Oracle connections, but the utility should support any standard jdbc connection.

1.  Ensure java is installed and in your PATH.

2.  Download Jisql from http://www.xigole.com/software/jisql/jisql.jsp.  Note the names in the uncompressed LIB directory.

3.  Ensure you have the Oracle JDBC drivers installed and note the path to the jar files.

4.  Now, you should be ready for some simple testing.  A simple batch file like this will do the trick.
    @ECHO OFF

    :LOOPIT

    echo Query instance name from db

    date /t >> logfile.out

    time /t >> logfile.out

    echo First attempt using SID >> logfile.out

    java -classpath lib/jisql-2.0.11.jar;lib/jopt-simple-3.2.jar;lib/javacsv.jar;D:/app/melquist/product/11.2.0/client_1/jdbc/lib/ojdbc6.jar com.xigole.util.sql.Jisql -user system -password XXXXXXX ^
         -driver oraclethin -cstring jdbc:oracle:thin:@address-scan:1521:sidname -c ; -query "select host_name from gv$instance where instance_number=userenv('instance');" >> logfile.out

    echo Second attempt using SERVICE_NAME >> logfile.out

    java -classpath lib/jisql-2.0.11.jar;lib/jopt-simple-3.2.jar;lib/javacsv.jar;D:/app/melquist/product/11.2.0/client_1/jdbc/lib/ojdbc6.jar com.xigole.util.sql.Jisql -user system -password XXXXXXX ^
         -driver oraclethin -cstring jdbc:oracle:thin:@//address-scan:1521/servicname.yourcompany.com -c ; -query "select host_name from gv$instance where instance_number=userenv('instance');" >> logfile.out
   
    choice /n /t 10 /D Y /M "pausing 10 seconds"

    goto LOOPIT

5.  You can alternatively use TNS entries for testing the connections.  This was the best way I found to test a more complicated setup, such as a load balanced, fault tolerant connection.  Something like this (note - I made a copy of the tnsnames.ora file to c:\temp, then trimmed it down to just the two entries I wanted to test):
    @ECHO OFF

    :LOOPIT

    echo Query instance name from db

    date /t >> logfile.out

    time /t >> logfile.out

    echo First attempt using TNSENTRY >> logfile.out

    java -Doracle.net.tns_admin="c:/temp/" -classpath lib/jisql-2.0.11.jar;lib/jopt-simple-3.2.jar;lib/javacsv.jar;D:/app/melquist/product/11.2.0/client_1/jdbc/lib/ojdbc6.jar com.xigole.util.sql.Jisql ^
         -user system -password XXXXXX -driver oraclethin -cstring jdbc:oracle:thin:@TNSENTRY -c ; -query "select host_name from gv$instance where instance_number=userenv('instance');" >> logfile.out

    echo Second attempt using WAREHOUSE3 >> logfile.out

    java -Doracle.net.tns_admin="c:/temp/" -classpath lib/jisql-2.0.11.jar;lib/jopt-simple-3.2.jar;lib/javacsv.jar;D:/app/melquist/product/11.2.0/client_1/jdbc/lib/ojdbc6.jar com.xigole.util.sql.Jisql ^
         -user system -password XXXXXX -driver oraclethin -cstring jdbc:oracle:thin:@WAREHOUSE3 -c ; -query "select host_name from gv$instance where instance_number=userenv('instance');" >> logfile.out
   
    choice /n /t 10 /D Y /M "pausing 10 seconds"

    goto LOOPIT
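For reference, the load balanced, fault tolerant entry in my trimmed-down tnsnames.ora looked roughly like this (host names are made up):

WAREHOUSE3 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = ON)
      (FAILOVER = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = servicname.yourcompany.com)
    )
  )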

Thursday, February 7, 2013

Commit Strategy for Large Transactions

Let's face it.  Every DBA has come across a need to update a LOT of records.  The problem is that a mass update can take forever, blow out your redo logs, lock out other access, etc.  Here is my solution.  This was written and used in Oracle, but I'm thinking it could easily be modified to work with any ANSI compliant relational database.

DECLARE

CURSOR cur IS
   select * from owner.table_temp;
                 
ct NUMBER(5) := 0;
group_size NUMBER(5) := 5000;

BEGIN
    FOR srmvt in cur LOOP
        INSERT INTO owner.table   (<columns>)
            VALUES  (srmvt.<columns>);
       ct := ct + 1;
       IF ct >= group_size THEN
           commit;
           ct := 0;
       END IF;
   END LOOP;
   commit;
END;
/

I have used the same approach for updates as well.

DECLARE
CURSOR cur2 IS  
   select * from yourcompany.yourtable;
                
ct2 NUMBER(5) := 0;
group_size NUMBER(5) := 5000;
BEGIN
   FOR srmvt2 in cur2 LOOP
        update yourcompany.yourtable set process_flag = 'SUCCESS'
        where transactionid = srmvt2.transactionid;
       ct2 := ct2 + 1;
       IF ct2 >= group_size THEN
           commit;
           ct2 := 0;
       END IF;
   END LOOP;
   commit;
END;
/
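On more recent Oracle releases, the same batched-commit idea can be written with BULK COLLECT and FORALL, which cuts way down on the row-by-row overhead.  This is not from the original scripts, just the same pattern against the same hypothetical table:

DECLARE
    CURSOR cur3 IS
       select transactionid from yourcompany.yourtable;
    TYPE id_tab IS TABLE OF yourcompany.yourtable.transactionid%TYPE;
    ids id_tab;
    group_size CONSTANT PLS_INTEGER := 5000;
BEGIN
    OPEN cur3;
    LOOP
        FETCH cur3 BULK COLLECT INTO ids LIMIT group_size;
        EXIT WHEN ids.COUNT = 0;
        FORALL i IN 1 .. ids.COUNT
            update yourcompany.yourtable set process_flag = 'SUCCESS'
            where transactionid = ids(i);
        commit;
    END LOOP;
    CLOSE cur3;
END;
/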

Exclusion List Script

This may not be of interest to most of my blog readers, and I apologize for that. However, after struggling for days to find a good way to accomplish this, and having finally come up with a decent solution, I wanted to publish it here so I don't lose it. No sense reinventing the wheel, right?

My problem was I needed a good way to track an exclusion list. Normally I'd dump it into a table, but this particular list needed to be used on dozens of databases. I didn't want to maintain multiple copies of the same data - that would be a nightmare. And I didn't want to create dozens of database links - I don't really like them anyway. I thought about hard coding the list in my script, but that seemed kind of ugly. And the list needs to be SQL-ready, which means each element needs to be enclosed in single quotes and comma delimited. Let me tell you, bash was not crazy about the quotes. At one point, I had the list hard coded in my script with all the quotes escaped. Not pretty. It looked something like this:
EXCEPTIONLIST="\(\'one\'\,\'two\'\,\'three\'\)"

Bear in mind the end result for the SQL query just needs to be:
('one','two','three')

Option one. I created a table in my central database. I then queried it one time, put it in a variable, then used that for the queries to all the other databases. The bash code looks like this:
EXCLUDED_USERS=`sqlplus -s "/ as sysdba" <<EOF
set pages 9999
set lines 132
set trim on
set trims on
set echo off
set recsep off
set head off
set feedback off
set newpage none
select username from owner.table_name;
exit;
EOF`

echo $EXCLUDED_USERS
# Init the variable, put on the left paren.
DELIMITED_LIST=`echo -ne '('`
for USER in $EXCLUDED_USERS
do
# reconstruct the list, but add in the commas and single quotes.
DELIMITED_LIST=`echo $DELIMITED_LIST"'"$USER"',"`
done
# Remove the last comma we really did not need.
DELIMITED_LIST=$(echo ${DELIMITED_LIST%\,})
# Add in the right paren.
DELIMITED_LIST=`echo -ne $DELIMITED_LIST")"`
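
As an aside, the whole loop can be collapsed into a sed one-liner if you prefer, under the same assumptions (space delimited list, no spaces inside entries):

# Wrap each entry in single quotes, join with commas, add the parens.
DELIMITED_LIST="('$(echo $EXCLUDED_USERS | sed "s/ /','/g")')"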

Option two. Just for grins, I wanted to see if it would work with a space delimited variable list in the script. It looks like this. I ripped out the actual sensitive data.....

# Exclude listing. Put into multiple vars for readability.
# If you add a line, make sure you add it to the line that puts them all together.
# One space between entries please.
EXCLUDED_USERS1="blah blah2"
EXCLUDED_USERS2="blah3 blah4"
EXCLUDED_USERS=$EXCLUDED_USERS1$EXCLUDED_USERS2$EXCLUDED_USERS3$EXCLUDED_USERS4$EXCLUDED_USERS5

# Make sure we are set to a space delimiter
IFS=" "
# Init the variable, put on the left paren.
DELIMITED_LIST=`echo -ne '('`
for USER in $EXCLUDED_USERS
do
# reconstruct the list, but add in the commas and single quotes.
DELIMITED_LIST=`echo $DELIMITED_LIST"'"$USER"',"`
done
# Remove the last comma we really did not need.
DELIMITED_LIST=$(echo ${DELIMITED_LIST%\,})
# Add in the right paren.
DELIMITED_LIST=`echo -ne $DELIMITED_LIST")"`
EXCLUDED_USERS=$DELIMITED_LIST

The basic principle is the same either way. Capture the list into an array, then walk through the array, putting the quotes, commas and parentheses back in. Then it's good to go for the actual query:

USERLIST=`sqlplus -s "/ as sysdba" <<EOF
set pages 9999
set lines 132
set trim on
set trims on
set echo off
set recsep off
set head off
set feedback off
set newpage none
select username
from dba_users
where account_status = 'OPEN'
and username not in
(select distinct owner
from dba_objects)
and username not in $EXCLUDED_USERS
order by username
/
exit
EOF`

And that is more than you ever wanted to know about working in IT.......

Sql Server status sql script

Yeah, I know.  Another boring work post.  I have to save these things somewhere!

/* The Sql Server mail procedure does not like single quotes or #TEMP tables.

To get around this, we use some tables but drop them when we are done.
*/

use master
go

/* Get the error messages, agent.
*/

CREATE TABLE Errors (LogDate datetime, ErrorLevel int, vchMessage varchar(255))
CREATE INDEX idx_msg ON Errors(LogDate, vchMessage)
INSERT Errors EXEC xp_readerrorlog 0, 2
go

/* Get the error messages, server.
*/
CREATE TABLE Errors2 (LogDate datetime, ProcessInfo varchar(255), vchMessage text)
CREATE INDEX idx_msg2 ON Errors2(LogDate)
INSERT Errors2 EXEC xp_readerrorlog 0, 1
go


SELECT B.name as Database_Name,
       ISNULL(STR(ABS(DATEDIFF(day, GetDate(), MAX(Backup_finish_date)))), 'NEVER') as DaysSinceLastBackup,
       ISNULL(Convert(char(10), MAX(backup_finish_date), 101), 'NEVER') as LastBackupDate
into BackupResults
FROM master.dbo.sysdatabases B
LEFT OUTER JOIN msdb.dbo.backupset A ON A.database_name = B.name AND A.type = 'D'
GROUP BY B.Name
ORDER BY B.name
go

/* We really don't need to see the backup messages that get written to the log.
*/
delete from Errors
WHERE vchMessage LIKE '%Log backed up%' or
vchMessage LIKE '%.TRN%' or vchMessage LIKE '%Database backed up%' or
vchMessage LIKE '%.BAK%' or vchMessage LIKE '%Run the RECONFIGURE%'
go

delete from Errors2
WHERE vchMessage LIKE '%Log backed up%' or
vchMessage LIKE '%.TRN%' or vchMessage LIKE '%Database backed up%' or
vchMessage LIKE '%.BAK%' or vchMessage LIKE '%Run the RECONFIGURE%'
go

/* Move to target database, the one with filegroups that tend to fill up.
Create the temp table back in master, though. That is where the others are.
*/
use target
go

SELECT
cast(b.groupname as varchar(30)) [groupname],
Name,
[Filename],
CONVERT (Decimal(15,2),ROUND(a.Size/128.000,2))
[CurrentAllocatedMBSpace],
CONVERT (Decimal(15,2),
ROUND(FILEPROPERTY(a.Name,'SpaceUsed')/128.000,2))
AS [SpaceUsedMB],
CONVERT (Decimal(15,2),
ROUND((a.Size-FILEPROPERTY(a.Name,'SpaceUsed'))/128.000,2))
AS [FreeSpaceMB]
into master..FileGroupSpace
FROM dbo.sysfiles a (NOLOCK)
JOIN sysfilegroups b (NOLOCK) ON a.groupid = b.groupid
ORDER BY b.groupname
go

use master
go

EXEC msdb.dbo.sp_send_dbmail
@recipients=N'melquist@yourcompany.com',
@body='Status report is attached',
@body_format='TEXT',
@subject ='SQL Server Status Report',
@profile_name ='yourcompany',
@query ='EXEC master..xp_fixeddrives;
select * from Errors;
select LogDate, cast (ProcessInfo as varchar(10)) [ProcessInfo], cast(vchMessage as varchar(150)) [Message] from Errors2;
select * from BackupResults;
SELECT name "Failed Jobs" FROM msdb.dbo.sysjobs A, msdb.dbo.sysjobservers B WHERE A.job_id = B.job_id AND B.last_run_outcome = 0;
SELECT name "Disabled Jobs" FROM msdb.dbo.sysjobs WHERE enabled = 0 ORDER BY name;
select groupname, sum(CurrentAllocatedMBSpace) [Currently Allocated Space (MB)], sum(SpaceUsedMB) [Currently Used Space (MB)], sum(FreeSpaceMB) [Current Free Space (MB)]
from FileGroupSpace
group by groupname',
@attach_query_result_as_file = 1,
@query_attachment_filename ='QueryResults.txt'
go

Drop table Errors
Drop table Errors2
Drop table BackupResults
Drop table FileGroupSpace
go

Creating a ward directory

Again, another "exciting" post, which is pretty much just Mike making notes so he can retrace his steps someday if needed.

1.  Log into MLS.  Export the Membership data.  If you need help with this, let me know.

2.  Transfer the csv file to your database server.  Well, in my case, I'm running MySQL on my local desktop.

3.  Log into MySQL!

4.  create database MLS;

5.  create table MLS.Membership
(IndivID int(5) not null,
FullName varchar(50) not null,
PreferredName varchar(50) not null,
HofHID int(5) not null,
HHPosition varchar(50) not null,
HHOrder int(2) not null,
HouseholdPhone varchar(20) null,
IndividualPhone varchar(20) null,
HouseholdEmail varchar (50) null,
IndividualEmail varchar(50) null,
Street1 varchar(100) null,
Street2 varchar(100) null,
DP varchar(20) null,
City varchar(20) null,
Postal varchar(10) null,
State varchar(25) null,
Country varchar(25) null,
2Street1 varchar(100) null,
2Street2 varchar(100) null,
2DP varchar(20) null,
2City varchar(20) null,
2Zip varchar(10) null,
2State varchar(25) null,
2Country varchar(125) null,
WardGeoCode int(10) null,
StakeGeoCode int(10) null,
Sex char(1) not null,
Birth date not null,
Baptized date null,
Confirmed date null,
Endowed date null,
RecExp date null,
Priesthood varchar(20) null,
Mission char(3) null,
Married char(10) null,
SpouseMember char(3) null,
SealedtoSpouse char(3) null,
SealedtoPrior char(3) null);

6.  load data local infile 'C:\\Users\\melquist\\Dropbox\\Mike\\church\\Membership.csv' into table MLS.Membership fields terminated by ','
enclosed by '"'
lines terminated by '\n'
(IndivID,FullName,PreferredName,HofHID,HHPosition,HHOrder,HouseholdPhone,IndividualPhone,HouseholdEmail,IndividualEmail,Street1,Street2,DP,City,Postal,State,Country,2Street1,2Street2,2DP,2City,2Zip,2State,2Country,WardGeoCode,StakeGeoCode,Sex,@Birth,@Baptized,@Confirmed,@Endowed,@RecExp,Priesthood,Mission,Married,SpouseMember,SealedtoSpouse,SealedtoPrior)
set
Birth = str_to_date(@Birth, '%d %b %Y'),
Baptized = str_to_date(@Baptized, '%d %b %Y'),
Confirmed = str_to_date(@Confirmed, '%d %b %Y'),
Endowed = str_to_date(@Endowed, '%d %b %Y'),
RecExp = str_to_date(@RecExp, '%b %Y');

7.  delete from Membership where IndivID = 0;

8.  Query that bad boy!
select * from
(
    SELECT HH.PreferredName "Name", S.PreferredName "SpouseName", HH.Street1, HH.HouseholdPhone, HH.IndividualPhone "CellPhone", S.IndividualPhone "SpouseCellPhone"
    FROM mls.membership HH,
         mls.membership S
    where HH.HHPosition in ('Head of Household','Spouse')
    and HH.HofHID = S.HofHID
    and S.HHPosition = 'Spouse'
    UNION
    Select PreferredName "Name", NULL "SpouseName", Street1, HouseHoldPhone, IndividualPhone "CellPhone", NULL "SpouseCellPhone"
    from mls.membership
    where married = 'Single'
    and HHPosition = 'Head of Household'
) PhoneListing
where Name <> SpouseName
or SpouseName is null
order by Name;

A few things to watch out for....

-> The date formats.  At least MySQL is pretty picky about those.
-> Rather than HHPosition, you could also query off of HHOrder.  Either way, I'm a bit leery of "smart" data.  Keep an eye on the data to make sure the key columns don't change in value.

Adding in children to MLS report

This goes along with my previous post on extracting the MLS data to create a usable phone listing.  This takes that output and adds in the children's names and birthdays.  Note: you really need Perl and Cygwin installed on your Windows machine.  Seriously.

1.  mlsdirectory.sql
select * from
(
    SELECT HH.HofHID, HH.PreferredName "Name", S.PreferredName "SpouseName", HH.Street1, HH.HouseholdPhone, HH.IndividualPhone "CellPhone", S.IndividualPhone "SpouseCellPhone"
    FROM mls.membership HH,
         mls.membership S
    where HH.HHPosition in ('Head of Household','Spouse')
    and HH.HofHID = S.HofHID
    and S.HHPosition = 'Spouse'
    UNION
    Select HofHID, PreferredName "Name", NULL "SpouseName", Street1, HouseHoldPhone, IndividualPhone "CellPhone", NULL "SpouseCellPhone"
    from mls.membership
    where married = 'Single'
    and HHPosition = 'Head of Household'
) PhoneListing
where Name <> SpouseName
or SpouseName is null
order by Name;

2.  mlschildren.sql
Select HofHID, PreferredName, Birth
    from mls.membership
where HHPosition = 'Other';
 
3.  mlsdirectory.pl
open (MLS, 'c:\temp\mlsdirectory.out');
open (TSV, '>c:\temp\mlsdirectory.tsv');
while (<MLS>)
{
    chomp;
    ($HofHID, $HofHNAME, $SpouseNAME, $address, $homephone, $cellphone, $spousephone) = split("\t");
    print (TSV $HofHNAME);
    print (TSV "\t");
    print (TSV $SpouseNAME);
    print (TSV "\t");
    print (TSV $address);
    print (TSV "\t");
    print (TSV $homephone);
    print (TSV "\t");
    print (TSV $cellphone);
    print (TSV "\t");
    print (TSV $spousephone);
    print (TSV "\n");
    open (CHILD, 'c:\temp\mlschildren.out');
    while (<CHILD>)
    {
        chomp;
        ($CofHID, $CofPNAME, $cbirth) = split("\t");
        if ($CofHID == $HofHID)
        {
            print (TSV $CofPNAME);
            print (TSV "\t");
            print (TSV $cbirth);
            print (TSV "\n");
        }
    }
    close (CHILD);
}
close (MLS);
close (TSV);

4.  mlsdirectory.bat
d:
cd \MySQL\MySQL Server 5.5\bin

mysql -uroot -pxxxxxx < c:\scripts\mlsdirectory.sql > c:\temp\mlsdirectory.out
mysql -uroot -pxxxxxx < c:\scripts\mlschildren.sql > c:\temp\mlschildren.out

c:
cd\scripts
perl mlsdirectory.pl

This creates a tab delimited file.

Find and break Oracle jobs

When doing a production clone, it is often useful to drop all the jobs, database links, etc.  The following is a bash script to do just that.

########################################################################
# Break all the freakin' jobs so that Streams can instantiate!         #
########################################################################

if [ "${1}" = "" ] ; then
  echo "Usage ${0} sid,  exiting..."
  exit 1
fi
. /home/oracle/.bashrc.CLIENT

TARGET_SID=${1}
PRIVATE_FILE=/home/oracle/.private

MASTER_USER=system
MASTER_PASS=`cat $PRIVATE_FILE|grep -i "${TARGET_SID}:"|grep ":SYSTEM:"|head -1|awk -F: '{print $3}'`
echo $MASTER_USER
echo $MASTER_PASS

# Find 'em and Break 'em
sqlplus -s "$MASTER_USER/$MASTER_PASS@$TARGET_SID" &lt;&lt;EOF
set pages 9999
set lines 300
set trim on
set trims on
set feedback off
set termout off
set verify off
set head off
set newpage none
set echo off
set term off
spool OUT/create_procedures.sql

select 'create or replace procedure '||schema_user||'.drop_job_'||job||' as '||chr(10)||'begin'||chr(10)||'dbms_job.broken ('||job||', true);'||chr(10)||'end;'||chr(10)||'/'||chr(10)
from dba_jobs
where schema_user not in ('SYS', 'SYSTEM', 'SYSMAN')
and broken = 'N';


spool off;

spool OUT/run_procedures.sql

select 'exec '||schema_user||'.drop_job_'||job||';'||chr(10)||'commit;'||chr(10)
from dba_jobs
where schema_user not in ('SYS', 'SYSTEM', 'SYSMAN')
and broken = 'N';

spool off;

spool OUT/drop_procedures.sql

select 'drop procedure  '||schema_user||'.drop_job_'||job||';'
from dba_jobs
where schema_user not in ('SYS', 'SYSTEM', 'SYSMAN')
and broken = 'N';

spool off;

@ OUT/create_procedures.sql
@ OUT/run_procedures.sql
@ OUT/drop_procedures.sql
exit
EOF

Count the number of text occurences in a file

Just another boring work note.  Sorry, not that entertaining.  But if you are on Linux and want to count the number of occurrences of each distinct line in a file, this rocks!

awk 'NF { count[toupper($0)]++ }
END { for (name in count) print name " appears " count[name] " times" }' /tmp/companyname_services.log

In this instance, the input file was /tmp/companyname_services.log
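
Note that this counts each distinct line (case-insensitive).  If you want true per-word counts, loop over the fields instead; a small variant of the same idea:

awk '{ for (i = 1; i <= NF; i++) count[toupper($i)]++ }
END { for (name in count) print name " appears " count[name] " times" }' /tmp/companyname_services.log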

PL/SQL + Bash to compare v$session to max allowed

This assumes you have the following encapsulated in a bash script that has set your environment correctly.  It will give you a percent-used value, captured into the MAXPERCENT shell variable.

        MAXPERCENT=`sqlplus -s -L ${MONITOR_USER}/${CHECK_PASS}@${SID} <<EOF
              set pages 9999
              set lines 132
              set trim on
              set trims on
              set echo off
              set recsep off
              set head off
              set feedback off
              set newpage none
              set serveroutput on
                DECLARE MaxAllowed number(4);
                CurrentSessions number(4);
                UsedPercent number(5,2);
                begin
                select value
                into MaxAllowed
                from v\\$parameter
                where name = 'sessions';
                select count(*)
                into CurrentSessions
                from v\\$session;
                UsedPercent := round((CurrentSessions / MaxAllowed)*100,0);
                dbms_output.Put_line(UsedPercent);
                end;
                /
        exit
EOF`
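
From there it is the usual compare-and-alert pattern.  A sketch, with the threshold and the mail address being whatever fits your shop:

THRESHOLD=90
if [ ${MAXPERCENT} -ge ${THRESHOLD} ] ; then
    echo "${SID}: sessions at ${MAXPERCENT}% of max allowed" | mailx -s "Session usage alert for ${SID}" dba@yourcompany.com
fi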