Thursday, September 20, 2012

Using Dell racadm to configure passwords with spaces fails on iDRAC 6

When using the Dell racadm tool to reconfigure an iDRAC 6, an attempt to set a user password containing spaces fails.  The manual documents quoting and escaping rules, but none of them work as of version 7.1.0 (Build 594).  This behavior occurs whether you set the cfgUserAdminPassword property directly on the command line or through a config file.

Errors are similar to the following:

ERROR: Invalid value in config file.
Group: cfgUserAdmin, Object: cfgUserAdminPassword, Value: "Some spacey password", Line [4]

or

ERROR: The specified object value is not valid.

It is possible to set a password with a space in it using the web interface.
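For reference, the command forms in question look like this (user index 2 is just an example; -i selects the user slot):

racadm config -g cfgUserAdmin -o cfgUserAdminPassword -i 2 "Some spacey password"
racadm config -f myconfig.cfg

Both the quoted command-line form and the config-file form return the errors above; the same commands succeed when the password contains no spaces.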

Host state doesn't match specification when checking host profile compliance on ESXi 5

When checking host profile compliance on VMware ESXi 5, you receive errors similar to these:

Host state doesn't match specification: Rule ????? of class VAAI needs to be deleted from the host
Host state doesn't match specification: SATP VMW_SATP_DEFAULT_AA needs to be set to use Path Selection Policy VMW_PSP_FIXED by default
Host state doesn't match specification: device mpx.vmhba34:C0:T0:L0 Path Selection Policy needs to be set to default for claiming SATP

This persists despite opening "Enable/Disable Profile Configuration" on the profile and unchecking the relevant sections.

Solution:

Don't be so tidy!  You can create this problem by manually removing all unwanted entries from the profile.  Instead, just uncheck the sections you don't want enabled.  Removing the entries should work, but it doesn't.  At a minimum, retain the subsections under "Storage Configuration" titled "Pluggable Storage Architecture (PSA) configuration" and "Native Multi-Pathing (NMP)".

Thursday, August 30, 2012

SSIS Date Error Going From SSIS Variable to Stored Procedure using OLE DB

Quick Description:  An SSIS package that populates an SSIS variable of type DateTime and passes the variable as a parameter to a stored procedure (expecting DateTime) using OLE DB blows up with the error below.
[Execute SQL Task] Error: Executing the query "exec <procedure>..." failed with the following error: "Invalid time format". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
One Solution:  In my situation it turned out to be another run-in with my nemesis, the deeply confusing data type mappings in SSIS (2012 in my case).  According to this Microsoft article, the mapping for SQL Server DateTime is DT_DBTIMESTAMP.  For an Execute SQL task calling a stored procedure (using OLE DB) that expects a DateTime parameter, however, that mapping blows up with the error above.  Setting the parameter to type DATE (not DT_DATE) fixed the problem for me, and the parameter is populated with a full date/time (including fractional seconds).

Tuesday, August 21, 2012

Quick C# Code to get the Current Windows User in a WebApp and do Something (like a lookup against a Database) with it.

I (not actually that recently anymore) changed positions from IT to development, and since it will probably be years before I'm an expert, I haven’t had much to post, lest I get another “Duh” comment.  I have no pride, so I’m going to start posting what I’m sure are ridiculously basic snippets in case they're helpful.

This isn’t particularly advanced.  I wanted to capture the Windows username of the current user minus the domain, and look it up against an employee database to find employee info.  Basic, but it can be cannibalized for a lot of basic operations on a username.

Things not to do:  For a WebApp, don’t use WindowsIdentity.GetCurrent().Name; this will appear to work while you’re debugging, but once the application is deployed, it will return the user account of the application pool.  I haven’t been able/didn’t try to test Kerberos/pass-through authentication with it, but I’m guessing in that case it would return the current user id; still, that seems like a lot more work than getting it at the front door.

Things I don’t understand:  So many things, but in particular… how come Microsoft recommends using “using” (specifically excluding explicit IDisposable.Dispose calls and try…catch), but when I run code analysis it complains:

Warning    7    CA2000 : Microsoft.Reliability : In method 'mymethod' call System.IDisposable.Dispose on object 'connectionobject' before all references to it are out of scope.

Weirdness notwithstanding, in the example below I'm calling a stored procedure to look up an employee ID in a database table, but any operation could be substituted.  (Note that the SqlCommand below is wrapped in its own using block; CA2000 fires for any IDisposable that isn't disposed on all paths, not just the connection.)

using System.Data;
using System.Data.SqlClient;
using System.Security.Principal;
using System.Web;

//Returns a string; if you don't want to return anything, use protected void
protected string GetEmployeeIDfromUname()
{
    //Do not use the line below unless you want to know who the application is running as
    //string empUname = WindowsIdentity.GetCurrent().Name;

    //Get the Windows username of the current user
    string strEmpUname = HttpContext.Current.User.Identity.Name;
    //Parse the username to remove the domain (remove everything up to and including the \)
    string strEmpUnameParsed = strEmpUname.Substring(strEmpUname.LastIndexOf('\\') + 1);
    string strEmployeeID; //variable to hold the return value

    //This section can be replaced by whatever action you want to do.
    //Instantiate the connection and the command (both disposed by their using blocks)
    using (SqlConnection connectionEmpID = new SqlConnection("Your Connection String"))
    using (SqlCommand cmdEmpID = new SqlCommand("YourProc", connectionEmpID))
    {
        cmdEmpID.CommandType = CommandType.StoredProcedure;
        //Pass the parsed username to the procedure
        cmdEmpID.Parameters.Add("@LOGINNAME", SqlDbType.VarChar).Value = strEmpUnameParsed;
        //Define the output parameter (the employee number)
        SqlParameter empidret = cmdEmpID.Parameters.Add("@EMPIDRET", SqlDbType.Int);
        empidret.Direction = ParameterDirection.Output;
        cmdEmpID.Connection.Open();
        cmdEmpID.ExecuteNonQuery();
        strEmployeeID = cmdEmpID.Parameters["@EMPIDRET"].Value.ToString();
    }

    return strEmployeeID;
}

C# simple mailto: link set to return value of a method

Quick Description:  A quick example of building a dynamic mailto: string in C#.

I know mailto: links are not exactly the future, but for my purposes they're very handy for things like adding error-information context to the "Click here to get help with this error" button everyone (right?) puts on their custom error page.  .Net has its own methods for messaging if you want to send emails in the background, but for error pages it's nice not to have to assume the application is in a happy state when a user wants to tell you more about the ugly thing that just happened (hopefully it doesn't happen often, and obviously the system logging happens in the background).

So, that being said, using the asp:HyperLink allows you to use its server-side methods.  Below is a simple example of adding exception information.

//Do this work in the PreRender event of the hyperlink object.
protected void HyperLinkMailToMe_PreRender(Exception CurrentException)
{
    string strMessageBody = CheckNullExceptionInformation(CurrentException.Message, "Exception Message");
    string strErrorID = "if you give errors unique ids";
    //The first mailto parameter is introduced with '?', any additional ones with '&'
    string strMailtoLink = "mailto:myemailaddress?subject=Information regarding error " + strErrorID + "&body=" + strMessageBody;
    HyperLinkMailToMe.NavigateUrl = strMailtoLink;
}
//Check that the exception property is not null
public static string CheckNullExceptionInformation(object ExceptionProperty, string ExceptionPropertyLabel)
{
    if (ExceptionProperty == null)
    {
        return ExceptionPropertyLabel + " contains no information";
    }
    else
    {
        return ExceptionProperty.ToString();
    }
}
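One caveat: if the message body contains spaces or line breaks, the mailto link really ought to be URL-encoded.  Uri.EscapeDataString (in the System namespace) handles this; a minimal sketch using the same variables as above:

string strMailtoLink = "mailto:myemailaddress?subject="
    + Uri.EscapeDataString("Information regarding error " + strErrorID)
    + "&body=" + Uri.EscapeDataString(strMessageBody);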

QlikView Date Headaches

Short Description:  In QlikView, some dates formatted with the date or date# functions don't load as prescribed.

I often run into date display issues with QlikView and wonder how it handles dates internally.  I wonder this because I relatively frequently run into the situation where I run a load script and date fields default to date format in the primary file, but on saving the product of a load script to a qvd file dates go wonky and start displaying as integers, or datetimes display as dates and/or vice versa.  I have some trouble understanding how data types "degrade" between the load and the save when being consumed by the *same application*, but explicitly forcing them usually works.

Cue today's problem:  In loading a relatively simple script with a few joins, all the date fields imported differently (even 'mm/dd/yyyy' in the same column as an int date, so... fun).  No problem; run them through date(field, 'MM/DD/YYYY') as the last operation.  Not so much.  Surely date#(field, format) will work?  That was a no.  After much frustration, the culprit appears to have been a datetime field loaded from a query that used the T-SQL MAX operator.  Forcing it to a datetime immediately after loading from the T-SQL was the only way I found to get around it.  Forcing it later in the script/in a different operation/right before saving to the qvd did no good.

Your mileage may vary, but in the case of date fields (in QlikView) that just won't behave even when explicitly forced, it may make sense to quickly check whether the initial load from the source system contains any aggregation operators.  A sketch of the force-it-early approach is below.
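A minimal illustration, with hypothetical table, field, and format names (adjust the format string to whatever your source actually returns):

// Force the aggregated field to a real timestamp in the preceding load,
// immediately as it comes back from the source:
Dates:
LOAD Timestamp(Timestamp#(MAX_AUDIT_DATE, 'YYYY-MM-DD hh:mm:ss')) as AUDIT_DATE;
SQL SELECT MAX(AUDIT_DATE) AS MAX_AUDIT_DATE FROM SOURCE_TABLE;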

Thursday, August 2, 2012

Things that SQL Server Data Tools makes easy as compared to BIDS

After all the hoopla about 2012 SSIS (and its dev environment) being so much better than 2008, I was originally a bit disappointed with the new version (other than the fixes to some of the lineage ID issues, which are awesome), but after using it for a while, there are several things that just work in the new version that were painful before.

Microsoft's list of the official enhancements is here, and there are some big things fixed and added (love the dashboard), but there are a few much smaller improvements that have quietly fixed things that used to make me tear my hair out.

The little things that are a huge deal to me:

  • Ctrl-Z!  I've always been mystified as to why undo didn't exist, and after all this time still find myself attempting ctrl-z in BIDS for 2008 R2 out of habit formed by every other application ever.  Imagine my surprise when I instinctively hit ctrl-Z and got an undo before I could even remember that it's never worked before.  I have missed you so much ctrl-Z, please don't leave me again!
  • Saving the package:  "save a copy of package..." now defaults to the last type of save you did instead of having to change the selections every time.
  • You can hit enter in an annotation, and it expands to a point while entering data.

They're small things, but they make it feel like a grown-up environment.

Things I wish had been fixed:

  • Moving connectors between objects:  If I have a fancy conditional on a connector/precedence constraint and need to add or remove an object/sequence in the flow, it would make my life so much easier if I could re-attach the connector rather than delete and re-create it.

Wednesday, July 18, 2012

Unable to add/delete/view existing variables in SQL Server data tools (formerly BIDS) after package upgrade

Short Description:  The Add Variable button is greyed out in SQL Server Data Tools; in my case this was happening with packages upgraded from 2008 R2 to 2012.

I was running into an issue where I was unable to view or add variables in the Variables pane after upgrading packages from 2008 R2.  Everything except the Grid Options button was greyed out.  Eventually the ability would "magically" re-appear along with the existing variables, but the behavior was frustrating and a time-sink.

Workaround:  Unfortunately, I don't know what causes this (other than upgrading a package) or how common the problem is, but right-clicking the design surface and selecting Variables, rather than opening the Variables pane, appears to force it to refresh and behave itself again.  It appears to need this refresh only once per package.

Thursday, June 21, 2012

Oracle DATE Joins Fail for no Obvious Reason


Brief Description:  Attempting to join columns of the same datatype results in errors like the one below, or produces no results when there are valid joins in the tables:
ORA-01858:  a non-numeric character was found where a numeric was expected

Problem:
I had a mildly (okay, deeply) frustrating couple of hours being beaten up by this problem.  I had two tables: a dimension with general date info, and a fact table with a couple of date columns.  All three columns showed as the DATE datatype, but while column1 joined to the date dimension without issue, column2 failed explosively or returned no results, with a number of different errors depending on the variety of date handling tried: no conversion, convert to string then to date, convert to date with the month spelled out... etc.  All dates looked the same format in query results (in my case, using Toad), which was displaying dates in the painfully old 'DD-MON-YY' format (possibly an artifact of this particular database having been continuously upgraded for years).  Although I can't swear to what is actually happening under the covers, forcing NLS_DATE_FORMAT for the session (or, in the case of Toad, in the startup script) fixed the problem for me.  This may not be a problem in SQL*Plus, but I've had my share of frustration with NLS_DATE in SQL*Plus, so it's possible it's applicable there too.
In Toad:
To fix for this session only:
ALTER SESSION SET NLS_DATE_FORMAT='MM-DD-YYYY';
To fix in Toad startup script:
  • On the ribbon menu go to View/Toad Options
  • Select Scripts
  • Under Login scripts edit glogin.sql
  • If this is the first time you've edited the script, add the text below under the comment block, otherwise, enter it below the last operation
  • BEGIN
       EXECUTE IMMEDIATE 'ALTER SESSION SET NLS_DATE_FORMAT=''MM-DD-YYYY''';
    END;
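To confirm what a session is actually using before and after the change, the standard Oracle dictionary view works (nothing Toad-specific here):

SELECT value FROM nls_session_parameters WHERE parameter = 'NLS_DATE_FORMAT';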

Tuesday, June 12, 2012

Least Helpful SSIS Error Message: DTS_E_PRIMEOUTPUTFAILED

Short Description:  An SSIS package throws this error on an OLEDB source component

[SSIS.Pipeline] Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED.  The PrimeOutput method on component "My OLEDB component" (1729) returned error code 0x80004005.  The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.  There may be error messages posted before this with more information about the failure.

Unfortunately, the actual error is almost always downstream from the OLE DB source, but it isn't always reported.  If you're very, very lucky and the downstream component hits the error condition before the OLE DB source fails PrimeOutput, the actual error may be recorded earlier in the log/Execution Results.  The error basically means that somewhere downstream there is a component that doesn't like, doesn't fit, doesn't understand, or is missing a condition to deal with the data returned by the OLE DB source.  Turning up logging (SSIS menu > Logging, check the failing data flow, add a provider (make sure to check the box next to it to enable it), then decide how much detail you want on the Details page) may or may not result in the error being logged.  If it doesn't show up with verbose logging, I'm not sure of a better way than triple-checking, and trial-and-error removal of components to locate the problem.

Just in case it's helpful, here are some times I've run into this condition while writing packages:

  • Unhandled null in a downstream conditional split (see the expression sketch after this list)
  • Trying to retrieve Oracle's old-school, deprecated LONG datatype with the Microsoft OLE DB data source -- spoiler alert: it doesn't work.  It looks like it's working, but it treats the column as a text stream, and as far as I can tell it then fails once it hits a value that doesn't look enough like a text stream.  I don't know how to fix this other than phasing out datatypes deprecated last millennium.
  • Constraint violation (in this case, not null) on a downstream component.

Obviously, these conditions should cause the package to fail, but a clearer error message would be a real time saver.
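On the first bullet: guarding the null in the split condition itself would have saved me.  A minimal sketch in the SSIS expression language (the column name is hypothetical):

!ISNULL(MyColumn) && MyColumn == "SomeValue"

Rows where MyColumn is null then fall through to the default output instead of blowing up the comparison.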

Thursday, June 7, 2012

SSIS OLE DB task very, very, slow updating a small number of records

Description:  An SSIS OLE DB Command task (in this case, an update) with only a few thousand records to update takes a totally, unbelievably long amount of time to run against a large-ish (~6 million row) table.

My Scenario:  Writing an SSIS package for ETL against a fairly large fact table with only a few thousand changes per day.  Records since the latest audit date are picked up by the task, run through a lookup, and carved into inserts and updates with a conditional split.  Nothing fancy.  The insert (an OLE DB destination) runs quickly, but the update ran so long in test (some unknown amount longer than 30 min for fewer than 3k records) that I killed it because it was making me sad (also not feasible for our ETL window, but mostly the sad part).

The update (in my case) is pretty standard:  Check the numeric unique key (in our case, we don't enforce the source system's primary key) against an audit field (an Oracle DATE datatype), and if the incoming row is newer, update it.  Full disclosure:  because there are so many records and so few changes, I'm only pulling the rows more recent than the newest existing record in the destination table, but this isn't relevant to the update task problem (although skipping it would cause exciting unnecessary work on the pull-from-source side).

The Problem:  Everyone else might already know this, but it didn't really hit me: when the documentation says "Executes an SQL command for each row in a dataset," it really means it.  I hadn't put a huge amount of thought into it before today, but until I really traced it and watched performance, I sort of had the impression it relied on some of the work already done by the preceding Lookup and Conditional Split tasks, and maybe sort of cached the lookups it had already done.  On a scale of 1 to finding out that the Colonial furniture appraiser from Antiques Roadshow is actually two twins, it's not an earth-shattering revelation (you go your whole life believing the most shocking thing about colonial furniture is how much it would be worth if you'd retained the original "patina," and then you find out that guy is TWINS!).  The OLE DB Command task is performing the updates one by one, and thus doing a lookup against a big table also one by one.  That's a whole bunch of work on 6 million records even though only 3 thousand need to be updated.

Skip My Rambling and Try This:  While a staging-table approach may be best for truly enormous tables, with a medium-big table the right indexes took it from an amount of time I gave up on measuring to less than a minute.

Your Mileage May Vary (especially if your primary key isn't numeric):

A basic index on the audit date field and one on the key (if you're using a primary key, you already have the latter) took the update down to less than a minute; something like the sketch below.  A combined index on the audit field first, then the key, also reduced the run time significantly (down to 10 minutes), but it pales in comparison.
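A minimal sketch of the two single-column indexes (table and column names are hypothetical; the syntax is the same in Oracle and SQL Server):

CREATE INDEX IX_FACT_AUDIT_DATE ON FACT_TABLE (AUDIT_DATE);
CREATE INDEX IX_FACT_SOURCE_KEY ON FACT_TABLE (SOURCE_KEY);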

Friday, May 18, 2012

Joins fail converting nvarchar/varchar to int/numeric


Quick Description:  Attempting a join on two columns, one nvarchar and one of a numeric type, fails with the error:
Msg 8114, Level 16, State 5, Line 27
Error converting data type nvarchar to bigint. 

This just means your nvarchar column has some data that doesn't look like a number.
I generally have to compensate for this when I'm joining a table that has nvarchar data meant to reference a numeric key in another table (in my case, it's usually trying to join tables with a 'liberal' policy on keys to a conformed dimension).  In my experience, the problem is usually variations on words like 'unassigned' or 'unknown'.  Since there might not be a strict policy on what goes into the table, it's hard to anticipate which character strings to exclude (and often messy even if you do).
The obvious question is why SQL doesn't have something like C#'s TryParse, and I tried to put something similar together below.  But just quickly: if you want to join on two columns, one of which might have stray character data that should be discarded (it wasn't going to join anyway), you can use the kinda-sorta half of TryParse, the ISNUMERIC function, to test that you can convert a value to a number 'during' the join.  I'm sure there are lots of ways to do this, many of which are significantly more elegant; your mileage may vary.
Example Solution:
SELECT N.NUMBERS_ONLY, V.HAS_SOME_STRAY_STRINGS
FROM TABLE_WITH_A_NUMERIC_COLUMN N
JOIN TABLE_WITH_SOME_WEIRD_STUFF_IN_THAT_COLUMN V
    ON N.NUMBERS_ONLY = V.HAS_SOME_STRAY_STRINGS
    AND ISNUMERIC(V.HAS_SOME_STRAY_STRINGS) = 1

Straight Select:  If what you need is to return a column:

SELECT STRAYSTRINGS_TO_NUM =
    CASE
        WHEN ISNUMERIC(COLUMN_WITH_SOME_STRINGS) = 1 THEN CONVERT(INT, COLUMN_WITH_SOME_STRINGS)
        ELSE NULL  --or do something else with the non-numeric values
    END
FROM TABLE_WITH_SOME_WEIRD_STUFF

Sort of like TryParse:  In case what you really need is to convert a value and then report success or failure of the conversion, this is what I came up with for a basic equivalent of C#'s TryParse.  You could wrap either version in a stored procedure or function if you need to reuse it.

DECLARE @TRYPARSE_VAR nvarchar(255) = '123456'
SELECT @TRYPARSE_VAR = RTRIM(LTRIM(@TRYPARSE_VAR))
IF ISNUMERIC(@TRYPARSE_VAR) = 1
    BEGIN
        SELECT @TRYPARSE_VAR = CONVERT(INT, @TRYPARSE_VAR)
        SELECT 1
    END
    ELSE
    BEGIN
        SELECT 0
    END

Or you could not bother with ISNUMERIC and just use a TRY/CATCH:

DECLARE @TRYPARSE_VAR nvarchar(255) = '123456'
SELECT @TRYPARSE_VAR = RTRIM(LTRIM(@TRYPARSE_VAR))
BEGIN TRY

        SELECT @TRYPARSE_VAR = CONVERT(INT, @TRYPARSE_VAR)
        SELECT 1
END TRY
BEGIN CATCH
        SELECT 0
END CATCH    
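As a postscript, SQL Server 2012 added TRY_CONVERT, which returns NULL instead of raising an error when a conversion fails, so on 2012 the join guard above can be written without ISNUMERIC:

ON N.NUMBERS_ONLY = TRY_CONVERT(BIGINT, V.HAS_SOME_STRAY_STRINGS)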

Thursday, March 29, 2012

QlikView Expression for Converting Unix int Epoch Dates to date

I wrote an earlier post on ways to do this in SSIS, T-SQL, and PL/SQL, but today I needed to do it in a QlikView expression (or load script).  Since QlikView has AddMonths and AddYears functions, it wasn't immediately obvious that adding days is just a datetime or timestamp plus a number, cast back to the desired type.

To get long datetime

=Timestamp(Timestamp('1970-01-01 00:00:00.000') + YOURUNIXDATEVALUE/1000/60/60/24)

To get date only

=Date(Date('1970-01-01 00:00:00.000') + YOURUNIXDATEVALUE/1000/60/60/24)

The only tricky thing is that in both cases you're converting milliseconds to days, which makes sense for a date value, but slightly less so for a value that's granular to the millisecond.
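As a quick sanity check with a hypothetical value: 86,400,000 milliseconds is exactly one day, so

=Date(Date('1970-01-01 00:00:00.000') + 86400000/1000/60/60/24)

should return 1/2/1970 (in whatever your default date format is).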

------------

If you just need to add days rather than convert the milliseconds-since-1970 value, you can just add whole days.  For example, tomorrow's date:  =date(today() + 1).

Disabling or Enabling VMware Change Block Tracking (CBT) using PowerShell and PowerCLI

Recently I experimented with the VMware-specific backup technologies in a number of vendor products.  None of those technologies compared favorably to the simplicity of NetApp SnapMirror + NDMP when using NFS, but the solutions were vastly better than the old VCB-based ones.  I could see using them in an environment that leveraged iSCSI or local storage.

After the POC, I wanted to disable VMware Changed Block Tracking on my non-production VMs.  I’ve run into one bug around CBT and would rather not leave it turned on without a reason.  It is simple to set the value using PowerShell, but to get it to take effect you need to power the VM off or take a snapshot.  Here is my solution:

Get the VMs with CBT enabled:

$vms=get-vm | ?{$_.ExtensionData.Config.ChangeTrackingEnabled -eq $true}

Create a VM Specification to apply with the desired setting:

$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.ChangeTrackingEnabled = $false

Apply the specification to each VM, then create and remove a snapshot:

foreach($vm in $vms){
    $vm.ExtensionData.ReconfigVM($spec)
    $snap = $vm | New-Snapshot -Name 'Disable CBT'
    $snap | Remove-Snapshot -Confirm:$false
}

Check for success:

get-vm | ?{$_.ExtensionData.Config.ChangeTrackingEnabled -eq $true}

Wednesday, March 21, 2012

Why does my AJAX Toolkit Miss Stephen so Much?

Quick Description:  After adding an AJAX Control Toolkit control to a C# web application in VS 2010, a debug run blows up with the error:

Locating source for 'C:\Users\Stephen\Documents\Repos\Superexpert\AjaxControlToolkit\Server\AjaxControlToolkit\ExtenderBase\ExtenderControlBase.cs'

There’s a good reason for that: no one named Stephen has ever used my sandbox machine.

Embarrassing to admit, but rule this out:

In searching on this problem, I found lots of questions, but I didn’t find a solution…which is probably a fitting punishment for skipping a step in the very clear instructions on the Control Toolkit CodePlex page.  In my case (there may be other causes for the error), it was resolved after verifying I’d followed the instructions to add the ToolkitScriptManager to the page (spoiler alert: I hadn’t); see the sketch below.
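For reference, a minimal sketch of what was missing (the tag prefix and ID are arbitrary, and the Register directive can live in web.config instead of the page):

<%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="ajaxToolkit" %>

<ajaxToolkit:ToolkitScriptManager ID="ToolkitScriptManager1" runat="server" />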

Error Examples:


Call Stack Location:

AjaxControlToolkit.DLL!AjaxControlToolkit.ExtenderControlBase.OnLoad(System.EventArgs e = {System.EventArgs}) Line 306

Source File Information:

Locating source for 'C:\Users\Stephen\Documents\Repos\Superexpert\AjaxControlToolkit\Server\AjaxControlToolkit\ExtenderBase\ExtenderControlBase.cs'. Checksum: MD5 {56 64 14 a8 4c 29 25 94 27 e0 35 10 f1 b4 60 f2}
<rest of stack trace snipped>

And:

Exception Details: System.Resources.MissingManifestResourceException: Could not find any resources appropriate for the specified culture or the neutral culture.  Make sure "AjaxControlToolkit.Properties.Resources.NET4.resources" was correctly embedded or linked into assembly "AjaxControlToolkit" at compile time, or that all the satellite assemblies required are loadable and fully signed.

Friday, March 16, 2012

Enabling NLA for RDP connections using PowerShell

The critical nature of Microsoft bulletin MS12-020 may have you searching for a method to easily enable Network Level Authentication on your Windows 2008 SP2/R2 servers to avoid an unscheduled patch installation and restart.  It is possible to enable it with group policy, but that may not be the best method for every environment.  If you have PowerShell remoting set up on all of your servers, you can easily enable NLA:

Enable/disable NLA for Terminal Services/Remote Desktop Protocol (RDP) (0 = off, 1 = on).  The first line below reads the current value; the second sets it:

(Get-WmiObject -class "Win32_TSGeneralSetting" -Namespace root\cimv2\terminalservices -Filter "TerminalName='RDP-tcp'").UserAuthenticationRequired
(Get-WmiObject -class "Win32_TSGeneralSetting" -Namespace root\cimv2\terminalservices -Filter "TerminalName='RDP-tcp'").SetUserAuthenticationRequired(1)

For a single server you can enable the setting with invoke-command:

invoke-command -computername MyServer -scriptblock {(Get-WmiObject -class "Win32_TSGeneralSetting" -Namespace root\cimv2\terminalservices -Filter "TerminalName='RDP-tcp'").SetUserAuthenticationRequired(1)}

You can use VMware PowerShell commands to generate the list of servers and execute the command on each server:

$vmnames=get-vm -location "Windows-NonProduction" | select-object -expandproperty name | sort
$sessions = New-PSSession -computername $vmnames
invoke-command -session $sessions -scriptblock {(Get-WmiObject -class "Win32_TSGeneralSetting" -Namespace root\cimv2\terminalservices -Filter "TerminalName='RDP-tcp'").SetUserAuthenticationRequired(1)}

If you have a text file with the server names, you can use:

$vmnames = Get-Content .\names.txt
$sessions = New-PSSession -computername $vmnames
invoke-command -session $sessions -scriptblock {(Get-WmiObject -class "Win32_TSGeneralSetting" -Namespace root\cimv2\terminalservices -Filter "TerminalName='RDP-tcp'").SetUserAuthenticationRequired(1)}

Friday, March 2, 2012

SSIS Timeout to a Web Service with Web Service Task

Quick Description:  The Web Service task in SSIS throws an error similar to the one below, even though the return code from the actual Web Service is 200.

The error:

[Web Service Task] Error: An error occurred with the following error message: "Microsoft.SqlServer.Dts.Tasks.WebServiceTask.WebserviceTaskException: The Web Service threw an error during method execution. The error is: The operation has timed out.
   at Microsoft.SqlServer.Dts.Tasks.WebServiceTask.WebMethodInvokerProxy.InvokeMethod(DTSWebMethodInfo methodInfo, String serviceName, Object connection)
   at Microsoft.SqlServer.Dts.Tasks.WebServiceTask.WebServiceTaskUtil.Invoke(DTSWebMethodInfo methodInfo, String serviceName, Object connection, VariableDispenser taskVariableDispenser)
   at Microsoft.SqlServer.Dts.Tasks.WebServiceTask.WebServiceTask.executeThread()".

If you have access to the WebService being called (or have written a proxy), the logs show that the service itself returned a successful status code 200.

This is misleading (or at least it could be worded better).  It can mean that the Web Service task itself, rather than the Web Service being called, timed out.  This is made even more confusing by the fact that the timeout is set not on the Web Service task but on the HTTP connection manager.  The default is 30 seconds, and large amounts of data can take longer than 30 seconds to return without anything being wrong.  As far as I understand, 300 seconds is the limit, which doesn't seem long enough to me.

The timeout can be increased (up to 300 seconds) under Connection Managers > yourHttpConnectionManager > Properties > Timeout.