Using PowerShell to delete lines from a text file based on certain keywords

Continuing from the previous post, there was an issue with a certain text file: the output contained a lot of redundant data, with lines like these:

“Error 11111 : The check finished without errors”

“Error 2222 : Login succeeded for user AMAM”

We needed to eliminate such lines from the output file. So how do you do that?

You can add this line to your script:

Get-Content $OutputFile | Where-Object {$_ -notmatch 'finished without errors' -and $_ -notmatch 'Login succeeded for user'} | Set-Content Output_2.txt

So as you can see, the idea is to filter each line (the pipeline object $_) with the -notmatch operator followed by your pattern. To combine multiple phrases, use the -and operator.

The only issue I faced was that I had to write the output to a separate text file called Output_2.txt. The command did not work when writing back to the original output file, because Get-Content is still reading from (and holding open) that file while Set-Content tries to write to it.
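A common workaround (a sketch along the same lines, assuming the same patterns as above) is to wrap Get-Content in parentheses, which forces the whole file to be read into memory and closed before Set-Content opens it for writing:

# Parentheses make Get-Content finish reading (and release the file) first
(Get-Content $OutputFile) | Where-Object {$_ -notmatch 'finished without errors' -and $_ -notmatch 'Login succeeded for user'} | Set-Content $OutputFile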

Thanks to this site:

http://www.computerperformance.co.uk/powershell/powershell_match.htm#Example_4: More_Comparitors_-NotMatch_also_-And_


Using PowerShell to run scripts against multiple servers

Recently I started digging into PowerShell. I am amazed at how elegant and powerful it is, not to mention its speed!

To start, I browsed to Start > All Programs > Accessories > Windows PowerShell > Windows PowerShell, which opens the PowerShell (PS) command prompt.

You can navigate to the folder of your choice using the familiar DOS-style "cd" command. For example, to get to G:\Powershell_test_folder, just type "cd G:\Powershell_test_folder". I have saved some files in this location, hence the need to navigate here.

Now I have saved the names of all my target SQL Servers in a text file at that location called AllServers.txt. Simply add the names one per line; either plain server names or the "IP,port_number" format will work:

Server_1

Server_2

Server_n

Now we can add the query that we want to run against these n servers in a Query.sql file at that location. Let's say we want the database names on each server. So save this query in the Query.sql file:

select name from sys.databases 

Now we will run a Powershell script to run this query against all the servers.

But before we do that, we need to run these two lines at the PowerShell command prompt:

PS > Add-PSSnapin SqlServerCmdletSnapin100

PS > Add-PSSnapin SqlServerProviderSnapin100
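A side note: these snap-in names are specific to SQL Server 2008/2008 R2. On later versions, Invoke-Sqlcmd ships in a module instead, so the equivalent step (depending on what is installed on your machine) would be something like:

PS > Import-Module SqlServer    # or Import-Module SQLPS on SQL Server 2012-2014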

Here is the PS script (let's call it PScript1.ps1):

$QueryPath = "G:\Powershell_test_folder\Query.sql"
$OutputFile = "G:\Powershell_test_folder\Output.txt"

# Read the whole query file into a single string
$ExecuteQuery = Get-Content -Path $QueryPath | Out-String

"Results --> `r`n`r`n" > $OutputFile

foreach ($server in Get-Content "G:\Powershell_test_folder\AllServers.txt")
{
    $server    # echo the current server to the console as progress
    "-------------------------------" >> $OutputFile
    $server >> $OutputFile
    "-------------------------------" >> $OutputFile
    Invoke-Sqlcmd -ServerInstance $server -Query $ExecuteQuery -QueryTimeout 60000 | Format-Table -AutoSize | Out-String -Width 4096 >> $OutputFile
}

OK, so how does the script work? We define three variables using the $ symbol:

  1. $QueryPath : the location of the query file.
  2. $OutputFile : where the results of your query are written.
  3. $ExecuteQuery : the text of your query, read with Get-Content and flattened into a single string with Out-String. The "Results" header line is then written to the output file separately, just before the loop.

Now the foreach loop iterates through the servers in AllServers.txt, again reading the file with Get-Content. For each server, it writes the server name to the output file first, followed by the results of your query (in this case, the list of databases on that server).

The query is executed using the Invoke-Sqlcmd cmdlet, which is much like running the usual sqlcmd utility. Piping to Format-Table (ft) renders the result as a formatted table.
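Before running the loop against everything, it can help to sanity-check one server directly at the prompt (the server name here is just a placeholder from AllServers.txt):

PS > Invoke-Sqlcmd -ServerInstance "Server_1" -Query "select name from sys.databases"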

In my case, I used a more complex query to collect details from the error logs. The following query, saved in the Query.sql file, runs the xp_readerrorlog command to find every 'error' or 'fail' entry recorded in the SQL Server error logs over the past three days:

declare @Start varchar(30);
declare @End varchar(30);
set @Start = convert(varchar(30), getdate()-3, 25);
set @End = convert(varchar(30), getdate()+1, 25);
exec master..xp_readerrorlog 0, 1, N'error', NULL, @Start, @End;
exec master..xp_readerrorlog 0, 1, N'fail', NULL, @Start, @End;

To execute the PS script, simply type this at your PS command prompt:

PS > .\PScript1.ps1
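If PowerShell refuses with a "running scripts is disabled on this system" error, you may first need to relax the execution policy. A common per-user setting is shown below, but check what your environment allows:

PS > Set-ExecutionPolicy -Scope CurrentUser RemoteSigned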

PowerShell is fast and awesome! There is still a lot to learn and study here. Some resources that helped me (thanks to Pinal Dave, Raushan, Sandip Shinde, and others):

https://blog.sqlauthority.com/2014/11/23/sql-server-fix-the-term-invoke-sqlcmd-is-not-recognized-as-the-name-of-a-cmdlet/

http://www.sqlservercentral.com/scripts/powershell/129948/

SQL Server – PowerShell for Executing a SQL Query on Multiple Servers

https://www.mssqltips.com/sqlservertip/1476/reading-the-sql-server-log-files-using-tsql/

Importing data from Oracle database to SQL Server using SSIS

Let’s say we have a couple of tables in Oracle that need to be imported to SQL Server. Note that only certain tables need to be imported, and not the entire schema (packages, functions, etc).

One of the easiest ways to do this is to import using SSIS. Here, I will list the steps taken to import from Oracle 11g to a SQL Server 2012 database using SSIS installed with SQL Server 2008 R2.

Launch BIDS and create a new Integration Services Project. Create two new Connections in the Connection Manager. One of them should connect to Oracle. I used “Microsoft OLE DB Provider for Oracle”. Your Oracle connection should look like this :

Oracle_CM

As you can see above, you need to provide the correct Server name (Oracle database instance) and your username/password to connect to the Oracle schema.

Next, you need to create a SQL Server connection. I used the “SQL Server Native Client 10.0” provider for this. Here is a screenshot :

SQLServer_CM.jpg

So again, provide the instance detail (including the port number) in ServerName, and choose either Windows or SQL Server authentication to connect to the server. Once authenticated, the databases drop-down will be populated, and you can choose your SQL Server database to import the Oracle tables to.

Now let's import the data. Drag a new Data Flow Task into the Control Flow pane. Double-click it to get to the Data Flow pane.

ControlFlow_pane.JPG

DataFlow_pane.jpg

Your "OLE DB Source" is the Oracle database, so double-click it and assign the Oracle connection that you created earlier. Once it connects to the Oracle db, it will load all the tables and views into the table drop-down; choose the table that you want to import. This may take a while if there are a lot of Oracle schemas on the source database. Now it should look like this:

Oracle_01.jpg

Now double-click the "OLE DB Destination". This is your SQL Server target database. Assign the SQL Server connection that you created earlier. You can choose to create a new table to import the data into:

SQL_01.jpg

In general, you don't need to change any default values.

You need two more steps to complete the process:

  1. Assign AlwaysUseDefaultCodePage=TRUE in the Source properties :

DefCodePage.jpg

  2. Set "Run64BitRuntime" to False in the main Project Properties pane. Apparently this is needed because of an issue with the 32-bit vs the 64-bit drivers. You can read more here: https://blogs.msdn.microsoft.com/amitlale/2007/12/11/ssis-64-bit-setting/

Run64.jpg

That’s it. Now run the package and it should seamlessly import the Oracle tables to your SQL Server database.


Using “WITH VALUES” in a DEFAULT clause

The WITH VALUES clause is simple to understand.

Suppose we have a table called Orders:

Orders_table_main.jpg

Now we add a column with a default value:

  • alter table orders add newcol varchar(100) default 10
  • insert orders (orderid,orderdate,custid) values (101,getdate(),1)

Now query the table again :

Orders_1.jpg

You can see that the new column has the default value only for the newly inserted row. The original row, i.e. the one with orderid 100, does not get the new default value.

Now drop the column and add it again WITH VALUES:

  • alter table Orders drop column newcol;
  • alter table orders add newcol varchar(100) default 10 with values

Now query the table :

Orders_2.jpg

You can see that specifying WITH VALUES populated the default in all rows of the table, including the original one with orderid 100.
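One related detail worth noting (not shown in the screenshots above): if the new column is declared NOT NULL, SQL Server must give existing rows a value anyway, so the default is applied to them even without WITH VALUES. A quick sketch (newcol2 is just an illustrative name):

alter table Orders add newcol2 varchar(100) not null default 10;
-- existing rows get '10' here even without WITH VALUES,
-- because a NOT NULL column cannot be left empty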

Hope this clarifies the use of WITH VALUES.

Primary Database xxx has active log shipping secondary database(s). Drop the secondary database(s) first.

Ran into a bit of a glitch with Log Shipping today.

As you know, there are three simple documented steps to remove log shipping from your server:

  1. Remove log shipping on the primary using: sp_delete_log_shipping_primary_secondary
  2. Remove log shipping on the secondary using: sp_delete_log_shipping_secondary_database
  3. Remove log shipping on the primary using: sp_delete_log_shipping_primary_database
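Spelled out as T-SQL, the sequence looks roughly like this (a sketch only; the database name xxx and the secondary server name are placeholders, so substitute your own):

-- Step 1: run on the primary server
exec master.dbo.sp_delete_log_shipping_primary_secondary
    @primary_database = N'xxx',
    @secondary_server = N'SecondaryServer',
    @secondary_database = N'xxx';

-- Step 2: run on the secondary server
exec master.dbo.sp_delete_log_shipping_secondary_database
    @secondary_database = N'xxx';

-- Step 3: run on the primary server again
exec master.dbo.sp_delete_log_shipping_primary_database
    @database = N'xxx';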
Now, there was a problem here. The second step should ideally drop the secondary database, but it didn't. So when it came to the third step, it threw this error:

Primary Database xxx has active log shipping secondary database(s). Drop the secondary database(s) first.

So the secondary database was dropped manually and the command was executed again. It still failed. Something was not right.

The issue is that information about this database still lives in a system table. You need to query it and remove the stale record:

select * from msdb.dbo.log_shipping_primary_secondaries

Once you delete the record, you can safely run the third step and it will succeed.
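For reference, the cleanup itself is a one-row delete. Here is a sketch, assuming the primary database is the xxx from the error message (check the SELECT output above first, so you only remove the stale row):

delete s
from msdb.dbo.log_shipping_primary_secondaries s
join msdb.dbo.log_shipping_primary_databases d
    on d.primary_id = s.primary_id
where d.primary_database = N'xxx';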

SQL Server Export/Import Wizard and the missing Identity property (Or: How I learnt not to trust the Wizard)

SQL Server comes equipped with a fantastic wizard for importing/exporting data. It's so simple: just right-click your database > Tasks > Export Data (or Import Data). What could be simpler, right?

No, sir, this is one deceptive tool…

Suppose I create a table like this: just an integer identity column and a varchar column:

create table Birdman (
bid int identity(1,1),
bname varchar(100))

Let's say I want to export this table to another database. I fire up the Export Wizard and run through the steps. In the "Column Mappings" section, I dutifully tick the "Enable identity insert" checkbox, since I know an identity column is involved:

Identity checkbox

And it succeeds.

Now script out the newly created table in the other (target) database, and see what you get :

CREATE TABLE [dbo].[Birdman](
[bid] [int] NOT NULL,
[bname] [varchar](100) NULL
) ON [PRIMARY]

Dude, where’s my identity property ?????

Now, as it turns out, the SQL Server Import/Export Wizard has a nasty way of ignoring the identity setting when it copies a table across. It also conveniently ignores constraints, indexes, etc. In short, this method is thoroughly unreliable for transferring database objects.

A safer approach would be to script out the object and create it on the target. Once the table is created, use the Import/Export Wizard to copy data across databases.
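If you ever copy the rows with plain T-SQL instead of the Wizard, the same identity concern applies: you need IDENTITY_INSERT to preserve the original bid values. A sketch (SourceDb is a placeholder for your source database):

set identity_insert dbo.Birdman on;

insert into dbo.Birdman (bid, bname)
select bid, bname
from SourceDb.dbo.Birdman;

set identity_insert dbo.Birdman off;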

Hope this helps someone out there !


Of Profiler, Deadlock Graphs and XDLs

SQL Server Profiler is deprecated in favour of the more versatile Extended Events, but it still is a mighty useful tool to have.

You can save the deadlock graphs separately (as .XDL files) when setting up Profiler. Once you select the necessary lock/deadlock events in the "Events Selection" tab, a new tab called "Events Extraction Settings" appears. There, you have the option of saving the deadlock graphs separately, perhaps in another folder. This is far more convenient than sifting through a large Profiler trace file trying to find the deadlock graphs.

Of course, if you haven't done this earlier and all you have is the Profiler trace file, there is still another way to get the XDL files. Find the line with the deadlock graph in Profiler, right-click it, and select "Extract event data". You can then save it as an XDL file in another folder.

So now you have saved the XDLs and you want to see them. You can open them with SQL Server Management Studio to see the graphical view. To zoom in, there is no right-click option; instead, the solution is embarrassingly simple: hold down CTRL and roll the mouse scroll wheel.

But to get the full info, it's better to open them with Notepad++, which formats the XML to give a nice presentable view.

The XML output might look daunting at first, but we can decipher the contents with a few chosen tags. You will notice that the entire XML is located between <deadlock-list> and </deadlock-list> tags.
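As a rough map, here is a heavily trimmed sketch of that structure (the ids, spids, and object names are purely illustrative):

<deadlock-list>
 <deadlock victim="process123">
  <process-list>
   <process id="process123" spid="57" ...>
    <executionStack>
     <frame procname="MyDb.dbo.usp_Example" line="99" ... />
    </executionStack>
   </process>
   <process id="process456" spid="62" ...> ... </process>
  </process-list>
  <resource-list>
   <keylock ...> ... </keylock>
  </resource-list>
 </deadlock>
</deadlock-list>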

The first thing to observe is the victim attribute on the <deadlock> tag: <deadlock victim="xxx">. As the name implies, this identifies the victim, i.e. the process that was killed to break the lock cycle. The process is represented by a process id ("xxx" in this example), and you can get its other details from the next tag.

That next tag is <process>. It begins by repeating the process id (e.g. <process id="xxx") and continues with other details as attributes. The important one to take away is spid, which shows you the spid of the process that was terminated.

Now look inside <executionStack> at the first <frame> tag; its procname and line attributes show you the exact query that was deadlocked. It can be a single query, or a procedure. If it's a procedure (or a function), it shows you the exact line that caused the deadlock rollback. Even though this is very useful, it can sometimes be more intuitive to go into the procedure and look at the surrounding code. This is simple. Suppose line 99 caused this rollback. Script out your procedure and remove the header lines that SQL Server has added: the USE database line, the GO line, perhaps a SET ANSI_NULLS ON or SET QUOTED_IDENTIFIER ON, the procedure name enclosed within "/***" and "***/" markers, and further GO lines, until you reach the first line of your actual procedure. Then press Ctrl+G to go to the line number you want, e.g. 99.

If you skip removing those first lines as above, you may find yourself puzzled when line 99 turns out to be an empty line or a comment 😉

Now you know the details of the victim using just three markers: the victim attribute, and the <process> and <frame> tags. Look for the closing </process> tag, which signals the end of the victim process information.

You can use the same three markers to find the surviving process(es). Depending on your query, you may be lucky and find just two processes interlocked, i.e. a victor and a victim, but in some cases there may be many more. In either situation, the same markers will help you find the details of those processes.

Note, for example, the first <process> tag that appears immediately after the </process> tag of your victim. You can similarly read off the spid, the lock modes, the time waited, the query details, and so forth.

OK, thus far we have covered the process details. There is also a resource section in this XML, bounded by <resource-list> and </resource-list>. Within it, each resource node (a <keylock>, <pagelock>, and so on) gives very concise text about the object in question, the type of lock that was applied, etc. This corresponds exactly to the details captured earlier in the <process> section.

Once you have isolated the Profiler details like this, you can return to the code and start figuring out how to prevent deadlocks.
