Saturday, January 29, 2011

Having more than 2 logical drives in a HP ML350 G6

I've begun setting up an HP ML350 G6 and I've hit a wall. I've got 2 RAID arrays (each a mirror of 2 HDDs). I've now come to put a couple of extra disks into the server and it won't let me create logical disks for the 2 new drives. I don't want to RAID them, just have them appear as normal disks.

When I try to add them from the RAID utility (F8 during boot) I get a message saying ORCA can't handle any more logical drives, and that I should use the Array Configuration Utility to add them. I tried using the ACU to add them but can't see how to do it. The disks are both picked up and labeled "unallocated", but I can't find any way to allocate them.

  • The P410i controller you have in that machine can support up to 64 logical drives/arrays, but you have to define an array to put the allocated disks into. They can't just sit on their own; each disk has to be part of a RAID array even if it's the only member.

    Arcath : ok... but how do I do that? The only thing I can find that indicates it will do anything with the disks is adding them as spare drives to existing arrays
    From Chopper3
  • First of all, check that you have the latest firmware and drivers installed: download and install the latest Support Pack (or the equivalent for your OS), then run the Firmware Maintenance CD. In an upgrade scenario it is important to update the drivers before the firmware.

    Then you can run the Array Configuration Utility (ACU) to create a new raidset and then a logical volume.

    From lrosa
  • You need to purchase a memory module for the RAID controller, as controllers with no memory only allow you to create 2 arrays :-)

    Arcath : where would i be able to get one?
    Fran Garcia : just contact any HP reseller (or if feeling brave, you can try ebay :-P)
  • The ACU should give you an option to create a RAID array, but you have to click on the controller to get that option, not on a physical drive.

    If you are still having problems, make sure your controller firmware and your ACU software are fully up-to-date. Newer versions of ACU also have a "Wizard" tab that can automate some of these configuration steps.


Can I use Active Directory for user-level security in an Access application? Pretty Please?

My company makes fairly extensive use of an Access + MySQL application that would probably see some significant traffic on the Daily WTF if I posted the source code. The management of users and their permissions is getting out of hand, and I seem to spend more and more time dealing with tweaking these or trying to figure out why someone can't see what they're supposed to see.

It was originally set up to be used by three users in one warehouse. It's now used by over twenty users in four states, with more to be added soon, and the features have been added in a roughly 10-to-1 ratio with the users... The actual core application isn't bad, but managing users is a pain. Access makes a nice front end to the data itself, which is stored on a MySQL backend in our head office. Users have Cisco VPN boxes at satellite branches, and that's been solid as well. Scope has crept from a simple warehouse shipping record to a full-fledged CRM/ERP ...well, I don't suppose you could call this a solution. An emulsion, maybe. If I had the budget, I'd call up SAP and tell them to have at it. That, I'm afraid, is out of the realm of possibility for the foreseeable future.

Following instructions from Google (not always the safest thing to do) I used the 'User-Level Security Wizard' in Access to assign usernames and passwords to various users, which was fine when I started with 4-5 users total and 3 active users. But it's now quite unwieldy. My deepest wish and desire is that there would be some way to authenticate users and assign privilege roles based on Active Directory username and password. I'm told that's impossible. A few Google searches have turned up nothing of note.

I surmise that it should be possible to get some sort of authentication framework using Active Directory because VBA has links to all manner of APIs in Windows. However...is it worth the time and trouble? Has anyone ever gotten this to work, or am I liable to blow up not only my WTF-worthy application but the domain as well?

  • It's not possible to directly interface with AD at that level. The best you'd be able to do directly is assign file permissions based on AD accounts. It'd take a bit of effort to pull it off through VBA, but it's certainly not out of reach. I'd do a pretty solid ROI analysis before tackling it.

    David W. Fenton : This is wrong. Of course it's possible to get the info. from AD. It's also possible to get group membership with API calls. -1 for an incorrect answer.
    Wil : @David, but from within Access, it isn't the easiest thing to do (but as you said, not impossible) +1... also +1 to the answer simply because as incorrect as some of it is, I would personally use AD file permissions - however, I doubt this will help/solve the actual question simply because they need user/group restrictions within the file... so -1... net 0!
    squillman : @David: Like I said, it's possible if you code it. It's not possible to set it up through interfaces such as the User Level Security Wizard. I've coded many an AD interface for VBA... I know that's possible and this is what I stated.
    David W. Fenton : The original question mentions APIs for interfacing with AD, so I think the original question was not for a point-and-click or macro-based solution. Perhaps you could edit your answer to explicitly define "at that level," which seems to be the point of dispute here.
    From squillman
  • I know it's possible to do, but very few Access developers seem to be doing it. If somebody else wrote the code, I'd use it myself, but I don't need it enough to write it myself.

    The key concept is that you can access AD information via an LDAP query using ADO. There's no way to enforce permissions on Access objects with that, but you could certainly control application flow/presentation based on AD membership. See this thread for a starting point. Also, there's an MS Knowledge Base article on this that explains the LDAP approach.

    BTW, as long as you don't need AD-specific functionality (such as organizational units), you don't need to use AD at all. You can use regular API calls to get group membership information. See this Stack Overflow post for some code suggesting the direction to go (I can't verify that code, as it looks rather elliptical, with no API declarations included, but it gives the basic concept).

    Tony Toews : As David might recall I'm one of the few. Yes, I do have all the necessary code working but it took me about three days to figure out what code I needed and another two days to code it.

SQL SERVER 2005 with Windows 7 Problems

First of all, I restored the database from another server, and now all the stored procedures are named like [azamsharp].[usp_getlatestposts]. I think [azamsharp] is prefixed because it was the user on the original server.

Now, on my local machine this does not run. I don't want the [azamsharp] prefix on any of the stored procedures.

Also, when I right-click on the sproc I cannot even see the Properties option. I am running SQL Server 2005 on Windows 7.

UPDATE:

The weird thing is that if I access the production database from my machine, I can see the Properties option. So there is really something wrong with the Windows 7 security setup.

UPDATE 2:

When I ran the orphaned users stored procedure, it showed two users, "azamsharp" and "dbo1". I fixed the "azamsharp" user, but "dbo1" is not getting fixed. When I run the following script:

exec sp_change_users_login 'update_one', 'dbo1', 'dbo1'

I get the following error:

Msg 15291, Level 16, State 1, Procedure sp_change_users_login, Line 131 Terminating this procedure. The Login name 'dbo1' is absent or invalid.

  • You probably have orphaned users. When you are accessing the server from your machine, your domain credentials probably have DB admin access to the production server. Run this code to detect orphaned users:

    Use TestDB
    EXEC sp_change_users_login 'report'
    

    The output lists all the logins that have a mismatch between the entries in the sysusers system table of the TestDB database and the sysxlogins system table in the master database. To fix the problem:

    Resolve Orphaned Users

    Use TestDB
    EXEC sp_change_users_login 'update_one', 'test', 'test'

    SELECT sid FROM dbo.sysusers WHERE name = 'test'
    -- returns: 0x40FF09E48FBD3354B7833706FD2C61E4

    use master
    SELECT sid FROM dbo.sysxlogins WHERE name = 'test'
    -- returns: 0x40FF09E48FBD3354B7833706FD2C61E4
    

    This relinks the server login "test" with the TestDB database user "test". The sp_change_users_login stored procedure can also repair all orphaned users at once with the "auto_fix" parameter, but this is not recommended, because SQL Server attempts to match logins and users by name. In most cases this works; however, if the wrong login is associated with a user, that user may end up with incorrect permissions.

    : Thanks! I can see two users when I run the first command. then I run the second command like this Use mydatabase sp_change_users_login 'update_one', 'azamsharp', 'azamsharp' and get the following result: Terminating this procedure. The Login name 'azamsharp' is absent or invalid.
    : If I delete the two users from the database they come back again.
    mrdenny : To sync the user to a login you first have to create the login at the instance level.
    : The login with "azamsharp" is already created for the database. I can see it in Security => Users. I have deleted dbo1 and now I need to map "azamsharp" to the correct login name. It says "The Login name 'azamsharp' is absent or invalid."
    : Somehow I managed to solve the orphan user problem but I still cannot run the stored procedure prefixed with "azamsharp". Like "azamsharp.getarticles". It says no stored procedure found.
    : Please see the updated post.
    From Jim B

SVN client too old after I used sshfs

I have an svn checked-out web application on a shared hosting Linux server. The server has an svn client that I can access via ssh.

From my localhost, I did the following

> sshfs myusername@sharedhostingserver.com:webappdir/ /media/webapp
> cd /media/webapp
> svn update
svn: Working copy '.' locked
svn: run 'svn cleanup' to remove locks (type 'svn help cleanup' for details)
> svn clean up

It's been 15 minutes and the svn cleanup still isn't finished. I think it may have frozen. So then I did the following:

> ssh myusername@sharedhostingserver.com
> cd webappdir
> svn update
svn: This client is too old to work with working copy '.'; please get a newer Subversion client

So now I can't update my webappdir because my /media/webapp is stuck on svn cleanup, and my shared hosting server's svn client is out of date. I don't have privileges to install a newer svn client on the shared hosting server.

How do I get my svn update to work?

  • "The working copy format has been upgraded. This means that 1.4 and older Subversion clients will not be able to work with working copies produced by Subversion 1.5. Working copies are upgraded automatically."

    http://subversion.tigris.org/svn_1.5_releasenotes.html

    John : Does this mean I'll need to set up a separate checkout under a 1.5 version of svn, then do a manual merge of all files I've modified? I've modified so many files, and the repository has so many updates, doing a manual merge will take a long time. Any easier way around this?
    Alex Holst : Either upgrade the old svn client, so it can read the new working copy - or wipe the working copy and perform a new checkout.
    From Alex Holst
  • Apache provides a script that downgrades your working copy to an earlier format, in order to restore compatibility with older clients. See this FAQ entry: http://subversion.apache.org/faq.html#working-copy-format-change

SQL Server Driver for PHP under Windows 2000 Server SP4

I need to install a demo of a PHP application that makes use of the SQL Server Driver for PHP. The server is an old machine that runs Windows 2000 Server SP4. I've installed the whole stack without problems:

  • Apache 2.2.14
  • PHP 5.3.1
  • SQL Server 2005 Express Edition
  • SQL Server Management Studio Express

But there's a component that's not working as expected: the SQL Server Native Client. I get an error message as soon as I call sqlsrv_connect(). I've found four different releases and none works:

If I don't install it or I install the 2005 version:

  • SQLSTATE: IMSSP
  • code: -49
  • message: The SQL Server Driver for PHP requires the SQL Server 2008 Native Client ODBC Driver (SP1 or later) to communicate with SQL Server. That ODBC Driver is not currently installed. Access the following URL to download the SQL Server 2008 Native Client ODBC driver for x86: http://go.microsoft.com/fwlink/?LinkId=163712

If I install the 2008 version:

  • SQLSTATE: IM003
  • code: 160
  • message: Specified driver could not be loaded due to system error 127 (SQL Server Native Client 10.0).

The system requirements for SQL Server Driver for PHP 1.1 include Windows 2000 Service Pack 4. However, the system requirements for SQL Server 2008 Native Client mention Windows Server 2003 Service Pack 2 or greater.

Any idea?

  • I don't think that the SQL 2008 Native Client is supported on Windows 2000. Do you have a Windows 2003 system you can install this application on?

    Álvaro G. Vicario : I'm afraid not. It's a Windows-only app and that's the only Windows box available.
    mrdenny : Sounds like you've hit a wall then. I would recommend a server OS that isn't 10 years old for your Windows server. Can you run a Windows VM under Linux or whatever machines you have handy?
    Álvaro G. Vicario : My advice to the client was "get a newer machine". But, of course, finding a way to reuse an existing resource is always a bonus :)
    From mrdenny
  • I've got further information from Serban Iliescu in the official forum:

    Upon further investigation, I found that SQL Server Native Client 2008 (a.k.a. SNAC 10) will not load on Windows 2000 because some DLL dependencies are not satisfied by the operating system. There is a defect currently logged against SNAC 10 to resolve this issue by gracefully downgrading, but that defect is yet to be addressed. Meanwhile you can try the previous version of the PHP driver (i.e. version 1.0), which is linked to SNAC 9 (SQL Server Native Client 2005); SNAC 9 is supposed to work on Windows 2000.

    To sum up:

    1. SQL Server Driver for PHP 1.1 requires SQL Server Native Client 2008
    2. SQL Server Native Client 2008 has a known bug and won't load under Windows 2000
    3. SQL Server Native Client 2005 works fine under Windows 2000
    4. SQL Server Driver for PHP 1.0 only requires SQL Server Native Client 2005

    I installed the older version of the PHP driver together with the 2005 version of the Native Client. I also had to downgrade PHP from 5.3 to 5.2.

    My application seems to work with the old version of the PHP driver. So the demo is up and running at last.

  • Where can I get v1.0 of the SQL Server Driver for PHP? I am stuck with the same problem. Help! Can't seem to find it anywhere.

    From Dipo

My PHP Server is giving session errors all of a sudden

I am running XAMPP, which is basically a LAMP setup for Windows. I have been using it for years with no trouble, and all of a sudden all my sites' pages that use PHP sessions are giving errors like this...

Warning: session_start() [function.session-start]: Cannot send session cache limiter - headers already sent (output started at C:\webserver\htdocs\friendproject2\labs\2.php:1) in C:\webserver\htdocs\friendproject2\labs\2.php on line 3

I realize that this generally happens for one of two reasons:

  1. Whitespace is printed to the screen before the session_start() function is called.
  2. Anything else is printed to the screen/browser before session_start() is called.

Now my problem is different. Before tonight, I had hundreds of files that used sessions and none of them showed any of these errors. It is not just one file where I am overlooking a user error; this just started affecting all my files. I have not made any changes to my computer tonight or recently that I can recall either.

What could be causing this? It is driving me insane and nobody seems to know why this started happening. I think it must be server related.

I can even create a file, put it into any folder of my server's web root, make it as simple as the file below, and it will still give the error shown above...

<?php
session_start();

$_SESSION['test'] = 'test value';

echo $_SESSION['test'];
?>
  • Check whether all your files are saved with UTF-8 encoding. UTF-8 encoded files may include a BOM (Byte Order Mark) signature at the very start of the file. PHP does not understand the BOM, and when it encounters one at the beginning of a file it assumes it's dealing with output and sends it to the browser; by then it's too late to modify headers.

    The solution is to make sure you save your files without a BOM (for example as ANSI, or as UTF-8 without BOM); configure your IDE/editor accordingly.

    Hope it helps.

    EDIT:

    If this is the case, you probably have a lot of files that need converting. You can try using this bash snippet, which uses iconv to do it for you (adapted from: http://stackoverflow.com/questions/1182037/osx-change-file-encoding-iconv-recursive):

    for f in /mydisk/myfolder/*.php
    do
      # -c drops characters (such as the BOM) that have no ISO-8859-1 equivalent
      iconv -c -f UTF-8 -t ISO-8859-1 "$f" > "$f.tmp" && mv "$f.tmp" "$f"
    done
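
    Before converting anything wholesale, it can help to list which files actually begin with a BOM and strip only those three bytes. A minimal sketch, assuming bash and GNU grep/tail; the /tmp/bom-demo paths are stand-ins for your htdocs tree:

```shell
# Create two sample files: one with a UTF-8 BOM, one without.
mkdir -p /tmp/bom-demo
printf '\xef\xbb\xbf<?php echo 1;' > /tmp/bom-demo/bad.php
printf '<?php echo 1;' > /tmp/bom-demo/good.php

# List .php files whose first bytes are the BOM (ef bb bf):
grep -rl $'^\xef\xbb\xbf' --include='*.php' /tmp/bom-demo

# Strip the three BOM bytes in place (simple version; assumes
# file names without spaces):
for f in $(grep -rl $'^\xef\xbb\xbf' --include='*.php' /tmp/bom-demo); do
  tail -c +4 "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
```

    Unlike a full re-encode, this only touches the three leading bytes, so any non-ASCII content elsewhere in the files is left alone.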
    

SQL 2008 Database mirroring over WAN link with certificates

I am configuring database mirroring over a WAN link between two SQL 2008 named instances whose host servers are not domain members, using certificates for authentication. After many attempts to get this working by myself, I started from scratch and went step by step according to BOL (http://technet.microsoft.com/en-us/library/ms191140.aspx); however, the issue I was trying to resolve is still present.

The issue is in the final set of steps, which sets the partner status on each server: when I perform step #2 to set the partner status on "HOST_A", I get the following error:

Msg 1418, Level 16, State 1, Line 2

The server network address "TCP://server-b.our-domain.com:5022" can not be reached or does not exist. Check the network address name and that the ports for the local and remote endpoints are operational.

The interesting thing however is that I can see traffic on the firewall (TCPDUMP) going back and forth between the two servers for about 15 seconds before that error gets spit back at me.

At this point I am not sure how to proceed because I can connect to the SERVER-A\BLUE instance from SSMS on SERVER-B and I can connect to the SERVER-B\RED instance from SSMS on SERVER-A without a problem. I am very confused why I am getting the error at this point in time. The endpoints on both sides are listed as started in sys.tcp_endpoints and sys.endpoints.

Another interesting note is that before attempting step 2, I can telnet from SERVER-A to SERVER-B over 5022 and from SERVER-B to SERVER-A over 5022, however after step 2 fails, I can no longer telnet either direction. TCPDUMP will show traffic going from either to the other, but there is no return traffic after step 2 fails.

The main issue for me is that this error seems to have the wrong description for whatever is actually happening, since clearly the network address exists and can be reached, and the endpoints are operational as well (at least until the operation fails). I have also tried doing the config in the opposite direction (doing a full backup/restore with no recovery, etc., going the other way) and it fails the exact same way with the same errors, but again with all of the traffic showing on the firewall.

Lastly, in the SQL logs I also get the error "Error: 1443, Severity: 16, State: 2.", which seems to be directly related. Some of what I have found online suggests an issue with Windows authentication; however, that should not be the case since my endpoints are configured with certificates.

Any help with this would be greatly appreciated.

Here is the actual T-SQL used for setting this up, which follows what is in the BOL article.

--ON SERVER-A\BLUE
use master
go

create master key encryption by password = 'password123!'
go

create certificate CA_cert
        With subject = 'CA_cert Certificate'
go

create endpoint Mirroring
        STATE = STARTED
                AS TCP (
                        LISTENER_PORT=5022
                        , LISTENER_IP = ALL
                )
        FOR DATABASE_MIRRORING (
                AUTHENTICATION = CERTIFICATE CA_cert
                , ENCRYPTION = REQUIRED ALGORITHM AES
                , ROLE = ALL
        )
go

BACKUP CERTIFICATE CA_cert TO FILE = 'c:\sql\CA_cert.cer'
go


--ON SERVER-B\RED
use master
go

create master key encryption by password = 'password123!'
go

create certificate NJ_cert
        With subject = 'NJ_cert Certificate'
go

create endpoint Mirroring
        STATE = STARTED
                AS TCP (
                        LISTENER_PORT=5022
                        , LISTENER_IP = ALL
                )
        FOR DATABASE_MIRRORING (
                AUTHENTICATION = CERTIFICATE NJ_cert
                , ENCRYPTION = REQUIRED ALGORITHM AES
                , ROLE = ALL
        )
go

BACKUP CERTIFICATE NJ_cert TO FILE = 'c:\sql\NJ_cert.cer'
go


--ON SERVER-A\BLUE
create login NJ_login WITH PASSWORD = 'password123!'
go

CREATE USER NJ_user FOR LOGIN NJ_login
go

CREATE CERTIFICATE NJ_cert
        AUTHORIZATION NJ_user
        FROM FILE = 'C:\sql\NJ_cert.cer'
go

GRANT CONNECT ON ENDPOINT::Mirroring TO NJ_login
go


--ON SERVER-B\RED
create login CA_login WITH PASSWORD = 'password123!'
go

CREATE USER CA_user FOR LOGIN CA_login
go

CREATE CERTIFICATE CA_cert
        AUTHORIZATION CA_user
        FROM FILE = 'C:\sql\CA_cert.cer'
go

GRANT CONNECT ON ENDPOINT::Mirroring TO CA_login
go


--ON SERVER-B\RED
alter database testdb
        set partner = 'TCP://server-a.our-domain.com:5022'
go


--ON SERVER-A\BLUE
alter database testdb
        set partner = 'TCP://server-b.our-domain.com:5022'
go

-- Everything works fine up until this point at which time I get the previously mentioned errors
  • Attach Profiler to both instances (all three if there is a witness) and monitor the events Audit Database Mirroring Login Event Class and Broker:Connection Event Class.

    Error 1418 simply tells you that within a specific timeout the mirroring session was not up and running, for whatever reason. When you issue ALTER DATABASE ... SET PARTNER = 'tcp://...' on the principal, the principal will connect to the mirror, and the mirror will connect to the principal in response. This means that both the principal's 'partner' value and the mirror's 'partner' value, set previously, come into the picture: they both have to be correct, and the underlying infrastructure (routing, DNS, IPSec, firewalls) has to allow connections to the desired address:port from both partners. Throw in the witness, if you have one, and you've got yourself a pretty complex hairball of TCP connectivity that has to be verified.

    If the issue is certificate security, then the Audit Database Mirroring Login event will clearly state the cause and problem (certificate not valid, expired, bad certificate used, etc.). If the issue is the underlying TCP fabric (routing, DNS, IPSec, firewall, etc.), then the Broker:Connection event will actually show the problem.

    If you want to understand exactly how certificate-based authentication works, read on at How does Certificate based Authentication work.

    tnolan : Thank you very much for the response... I am looking into this now and will let you know if it works...
    tnolan : Ok so with auditing those two event classes, I get Negotiate Failure then Login Protocol Failure in database mirroring logins, but nothing from broker connections. This happens 10 times on both servers, then the query from SERVER-A fails out. From BOL(http://msdn.microsoft.com/en-us/library/ms190746.aspx), it looks like the protocol error is a result of the negotiate failure. Problem is I don't understand how their auth methods could be mutually exclusive (refer to BOL article). They are the same on both sides given my code which follows BOL instructions to the T.
    tnolan : I just blew out everything and started from scratch... by doing this on both sides... drop endpoint Mirroring; alter authorization on certificate::NJ_cert TO [dbo]; drop user NJ_user; drop login NJ_login; alter authorization on certificate::CA_cert TO [dbo]; drop user CA_user; drop login CA_login; drop certificate NJ_cert; drop certificate CA_cert; drop master key; Turns out, the CA side (SERVER-A) was missing the NJ(SERVER-B) user somehow... thank you very much for cluing me into the events to trace, that completely solved my problem... PEBKAC lol

How can I set full permissions for a user in a specified dir?

How can I set full permissions for a user in a specified directory in Linux?

  • You can give the user ownership with the following command:
    chown -R username:groupname directory

    Permissions are controlled with chmod, but more than likely, if you give the user ownership, the permissions will already give them full access.

    From einstiien
  • It depends on what you mean by 'full permissions'. If you want a user to have full read and write access to all files and directories in that directory, then this will help:

    chown -R username directory
    chmod -R u+rX directory
    

    The first command makes the user own the directory. The second command gives them read access throughout: the r grants read permission, and the X grants 'execute' (search) permission on directories, but not on files.

    einstiien : The problem with setting permissions that way is that you make every file executable, which may not necessarily be a good idea. Generally speaking, unless you know what files you're dealing with (or you just don't care), I wouldn't apply permissions to a whole directory tree this way.
    Rory McCann : Nope, that doesn't set all files executable, it will only set the directories 'executable'. That's the difference between x and X.
    einstiien : Sorry, didn't see the capital.
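
    The x vs. X behaviour is easy to verify in a scratch directory. A minimal sketch (the /tmp/chmod-demo paths are placeholders):

```shell
# Demonstrate chmod's capital X: it adds execute to directories,
# but not to plain files that had no execute bit to begin with.
mkdir -p /tmp/chmod-demo/sub
touch /tmp/chmod-demo/file.txt
chmod 600 /tmp/chmod-demo/file.txt   # plain file: rw-, no execute
chmod 700 /tmp/chmod-demo/sub        # directory: rwx (search bit set)

chmod -R u+rX /tmp/chmod-demo

# file.txt keeps mode 600; sub/ keeps mode 700.
stat -c '%a %n' /tmp/chmod-demo/file.txt /tmp/chmod-demo/sub
```

    A lowercase x in the same command would have made file.txt executable as well, which is exactly the side effect discussed above.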
  • The two solutions prior to my comment assume that you only want a SINGLE person to have full access to a directory and the sub-directories and files below it.

    Is that correct or do you want MULTIPLE people to have full access to that specific directory?

    From mdpc
  • If you do not wish to change the existing permissions of the directory, yet would like to give a user (or multiple users or groups) permission to the contents of the directory, you can use ACLs. Some filesystems (e.g. ext3) require the acl flag at mount time to enable ACLs. Often just using groups is sufficient, but ACLs can be more flexible.

    Look at the setfacl and getfacl commands for more information.

Free SQL Server tools

Does anyone have an up-to-date list of free administrative tools for SQL Server? There is a forum post on SQLServerCentral, but it's pretty outdated.

Please provide a link as some vendors do a pretty good job of hiding the free stuff.

  • It's not quite how I'd design a SQL UI, but it's reasonable and open source. And it now builds on win32!

    Package: mergeant
    Description: GNOME Database admin tool GUI for GNOME2
     Mergeant is a program which helps administer a DBMS database using the
     gnome-db framework. Basically, it memorizes all the structure of the database,
     and some queries, and does the SQL queries instead of the user (not having to
     type all over again those SQL commands, although it is still possible to do
     so).
    
    From jldugger
  • Web Based Enterprise Manager, haven't used it in a long time.

    http://sourceforge.net/projects/vwg-ent-man

    From Chad Grant
  • Toad, which is used to manage various SQL databases, has a freeware version.

    Chris Marisic : Yuck, toad is one of the worst pieces of software I've ever used and I had the enterprise version that was like $1,000+ a license and it was still horrible. I can't imagine how bad the freeware version is.
  • SQL Server Management Studio Express is slick and a great free tool for my limited MS SQL needs.

    From Mark Nold
  • Query Express is only 100 KB and doesn't require an install. It works with MS SQL, Oracle and other OLE DB sources.

  • Instead of listing them all here, here is a link which contains a lot of free tools and other resources (SQL Server Management Studio Add-ins; SQL injection tools; Administration; best practices, analysis, health and performance, Database Publishing to hosted servers, Update and Migration; SQL Server Analysis Services; SQL Server Integration Services; SQL Server 2005 BI Development Studio (BIDS); Code formatting ...):

    Free SQL Server tools that might make your life a little easier from the sqlteam.com blog.

    Maybe you can find something useful in my Delicious links for sqlserver+freeware too.

    From splattne
  • SQL Internals Viewer - Allows you to browse

    SQL Server Fine Build - Best Practices one-click installer tool

    SQL IO GUI - GUI Tool for SQIO

    DMV Stats - Collect & analyze DMV data

    SQL Server Web Data Administrator - Perfect for admin' SQL Server on shared hosting

  • There are some new tools available from my company, Atlantis Interactive, all either free or with free editions.

    An IDE with code completion, schema comparison tool and space visualisation tool: Atlantis web site

    Edit: @squillman - thanks for the heads up. I can't comment yet, but I've just spent ages working on these, and just trying to make sure they get included in lists of free tools, seeing as I'm giving them away! :)

    squillman : Just a heads up, someone flagged this as spam (which generates a downvote). Generally it's not good practice here to advertise your company, that's my guess as to the reason for the flag.
    John Gardeniers : At least he's not trying to hide the connection, unlike a few others on this site.
  • Redgate just recently released a really cool (free) plugin to SSMS called SQL Server Search, currently in Beta. It does a keyword search for any object in your database or server, and is fast.

    There's a really cool story behind the development of this tool. Robert Chipperfield blogged about it. Great read.

    From squillman

Web browsing is fast, but downloads are slow

I work for a company on my university's campus, helping with general IT problems and some web development. But lately there has been a problem that has me and my boss completely stumped. We, plus one contractor, make up the entire IT department, so I'm reaching out to you for help.

All around the office, we have wall jacks. These collect in a closet down the hall and all plug into a switch. This switch, along with our individual server jacks, plugs into another switch, and that switch plugs into our firewall hardware. Then the firewall is connected out to our campus network. Our campus internet is, well, very fast. I don't know exactly the terms, tiers, etc., but we have thousands of students and downloads can run as fast as 10 MB/s at night; uploads are sometimes even faster. I think we're practically ISP level. In short, I have a lot of faith that it is not the campus side of things that is causing a problem, combined with other evidence I'll mention in a moment.

So our symptoms: web browsing is fast. Web pages, images, etc. load instantly. No problems there. But then when I go to download something, the download starts fast but very quickly (a matter of seconds) drops to nearly 0. Often it will actually drop to 0 and time out. This happens with even very small files, 1 MB or less.

It smells to me like a QoS sort of thing. I'm not entirely sure, and I wanted to get your opinions first. My boss is hesitant to touch our firewall, much less let me touch it, and it was set up and is managed by a consultant remotely.

These problems don't seem tied to a time of the day. I've tried downloads after 5:00 and still the same thing happens.

From my desk, I can turn on my wireless adapter and pick up the campus wireless access point. If I unplug ethernet and connect to it, downloads are fast. This adds to my suspicion that it's limited to our company network.

Also, a number of weeks ago the consultant upgraded our firewall firmware. Suddenly everything was very fast. I tested with downloads from Sun and speedtest.net and things were blazing fast, as they should be with our campus internet! It was wonderful, and I figured the slow speeds were an old firmware bug. In a matter of days, things steadily declined until they were back to the old symptoms.

Oh, and we have antivirus installed on every computer, and we keep it up to date. Though I suppose the possibility is still there that someone could have spyware which is bogging down our internet, in which case what is the easiest/best way to find this out? (maybe this should go in a separate question)

Thank you for your patience in reading all of this. Do you have any ideas as to what I can try? Is this something that you've experienced before? What sort of tools or methods can I use to try and diagnose the problem?

P.S. Everything here is Windows: Windows Server 2003 and 2008 on our servers, and Windows XP on employees' machines.


Update: We are submitting a ticket to the university to just take a look and see if they notice anything unusual and/or can suggest methods for us to try and pinpoint our problem. Hopefully they'll be helpful! I'll update this to let you know how it goes.

Update again: We found a hub (yes, a HUB) right between our campus connection and our firewall. It had only those two ethernet cables plugged into it, nothing else. After removing the hub, our speeds jumped up to several Mbps with no more dropped downloads. However, in talking with the campus, we got them to run a gigabit line to our firewall in place of the 100 Mbps line, and we also upgraded all of our switches to gigabit. As of Friday, we are at about 65 Mbps up and down (according to speedtest.net at 8am)!! Go NC State!!

  • Since the firmware update seemed to help temporarily and the improvement went away over time, it may not have been the firmware update itself but the fact that the firewall was rebooted in the process.

    I would test a download, power cycle the firewall, test again.

    If power cycling temporarily remedies your issue, then you have your smoking gun against the firewall.
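
    A quick way to make that before/after comparison repeatable is to time a download with curl. This is only a sketch: the URL below is a placeholder, so substitute any reasonably large file hosted outside your network.

    ```shell
    #!/bin/sh
    # Measure sustained download throughput. The URL is a placeholder:
    # point it at a large file hosted off-campus.
    URL="http://example.com/testfile.bin"
    curl -s -o /dev/null \
      -w 'downloaded %{size_download} bytes at %{speed_download} bytes/sec\n' \
      "$URL"
    ```

    Run it once before the power cycle and once after; if the reported speed recovers after each reboot, that points at the firewall.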

    One explanation I can think of: a college network is going to be riddled with junk, all those students with poorly secured, spyware-ridden systems sending all kinds of traffic around. The firewall may just be getting overloaded by all of that, and the power cycle clears it up temporarily.

    It could also simply be a faulty unit.

    Since the firmware was just updated, either it isn't a software problem, or it's a still-open issue and you probably won't see an instant remedy that way.

    What kind of firewall is this? If it is a simple SOHO router, just try putting another in its place and see how it behaves; you can easily swap the original back.

    Ricket : I'm not so sure though. Sure, the reboot seems to point to the firewall, but say, for example, that the bottleneck is a giant BitTorrent stream with hundreds of connected peers. After a firewall reboot, all the connections are closed and things are fast, but over time the torrent software announces itself to trackers again, peers reconnect, and everything gradually slows down again. The firewall would have nothing to do with something like that. It's the kind of thing spyware might do, right?
    ManiacZX : True, and that goes back to my suggestion of swapping firewalls as a test. Also, if it is more than a simple SOHO router, you should be able to see at least the total bandwidth used, and possibly per-host bandwidth, to show whether it is purely your traffic being eaten up.
    ManiacZX : Also, if you are that concerned about spyware, run a scan on everyone's computers with something like MalwareBytes. Keeping active AV on computers is fine, but it still lets stuff slip through that dedicated ones like MalwareBytes seem to destroy with no effort.
    ManiacZX : Another option if you really want to be definitive, shut down all the workstations in your network except one you know should be clean and do your testing.
    Ricket : On Friday before leaving work I wrote a set of scripts that would distribute and run HijackThis on every computer in the office and save logfiles into a network folder. Unfortunately, all of our computers are laptops, so at 5pm on Friday they are all either shut down or taken home... I plan to run the script sometime tomorrow (Monday) and then analyze each logfile and see what comes up. Do you know of a better anti-virus/spyware program that can run silently and report into a logfile?
    From ManiacZX
  • What does the campus IT department say? Some areas on the local campus here are bandwidth-throttled based on what the traffic is. Torrents and streaming audio/video get throttled heavily. Internal traffic is fast unless it is student A streaming to student B. Traffic is shaped based on content or the area you are in: an engineering prof's lab gets lots of bandwidth; a dorm room, not so much. Perhaps your area is receiving less bandwidth from the campus network.

    Ricket : I suppose it is worth asking if there's anything they can do to help diagnose the problem, but I believe the campus doesn't do any shaping because of our gigantic bandwidth. People host servers in their dorm rooms because the dorm internet is the same as everywhere else. In addition, the symptoms going away after the firewall reboot suggest something more like ManiacZX talked about in his answer.
    Dave M : What is the make/model of firewall on your segment of the network?
    From Dave M
  • I bet there's a firewall that captures downloads to virus scan them before they are delivered to you. I've seen this situation where downloads will seem to stall, and then after a few minutes the entire thing will come down all at once, but what was happening was, the download was going to the AV box, virus scanned, and then (if clean) sent to the requester all at once.

    Ricket : No, I don't think this is it. For the most part, downloads will come in at an extremely slow rate - around 10kB/s or less. But often, partway through, it slows down until it just reaches 0 and times out. I can double check but I'm pretty sure our firewall appliance does no antivirus scanning.
  • We found a hub (yes, a HUB) right between our campus connection and our firewall. It had only those two ethernet cables plugged into it (the campus connection and the firewall), nothing else - so it was completely pointless, and it was throttling the connection to half-duplex. After removing the hub, our speeds jumped up to several Mbps with no more dropped downloads.

    However, in talking with the campus, we got them to run a gigabit line to our firewall in place of the 100 Mbps line, and we also upgraded all of our switches to gigabit. As of this past Friday 4/16, we are at about 65 Mbps up and down (according to speedtest.net at 8am)!! Go NC State!!

    From Ricket

stable, recent, free single system image solution for linux

I've started looking into creating a load balanced virtual server, for running mostly web services, project management services (version control, etc..), and applications of that sort. And I need an open source (Linux) solution.

Wikipedia has this entry; there are some seemingly promising, stable projects, but most are long dead. LVS and Kerrighed look possible, but I am not sure. Are they worth investing time in?

What would be a good solution? (Although I cannot afford a commercial solution (Linux or otherwise), I would like to learn about these alternatives and would appreciate comments to that end.)

Thanks

  • I'm not sure I'm answering what you're actually asking, but if you're looking for a way to take a VM and mirror it, you can use any of the free virtualization tools I'm aware of (VMware Server, ESXi, kvm, etc)

    • make your VM with everything you need on it
    • copy the VM
    • make changes to the copy (ip address and hostname)
    • start both VMs
    • insert load-balancer (hardware or software, doesn't matter)
    • .. no 6th step I can think of :)
    sly : I am not sure if you are answering what I am asking, either, but it sure doesn't sound like it. I am looking for a combination of hardware (a cluster) acting (virtually) as a single server with a lot of memory and many CPUs. Or perhaps one level below: support for load-balanced process migration (as in Mosix), and such.
    From warren
  • It looks like you are trying to solve the problem at the wrong layer. I don't know of any sane system administrator who would try to use a Single System Image to run a web server when there are other methods, such as reverse proxies, that are much simpler and as a result more reliable.

    sly : Thanks for reading my mind; you're right, I am asking the wrong question. Although, not knowing much here, I am not sure how database coherency (probably the wrong terminology, but still) is achieved with this. I.e., can a database (or a single schema in, say, MySQL) be distributed, load-balanced, and coherent? Sorry, I can't find a better way to ask this; hope you see what I'm getting at.
    niXar : @sly: edit your question then. And there are ways to distribute data, but it's not an easy problem. Take a look at drbd.
    sly : OK, perhaps I'll post another. But let this question stand on its own: I wonder, even if it is insane, whether anyone is using SSI for load balancing (and NOT running HPC apps), and which of these systems would be worth looking at.
    3dinfluence : It's certainly more common with HPC. But I'm sure there are plenty of web apps being run on big iron, such as some of the large IBM Z series servers/mainframes, in a container or VM of some sort. This would be pretty similar to many normal servers in a SSI cluster.
  • Unless I'm really reading the question wrong, I'd say for hosting web applications, you're going about it the wrong way.

    I'd suggest having multiple nodes (virtual, or physical) and managing their config with puppet.

    Your nodes could be a whole rack of 1U servers, or a bunch of powerful 3U multiprocessor servers running KVM and then an OS of your choice as virtualization guests.

    Given 4 servers you could set them up as follows:

    • Server 1: Load balancer + HTTP Node (running Varnish and Apache)
    • Server 2: Load balancer + HTTP Node (running Varnish and Apache)
    • Server 3: HTTP Node + DB Master (running Apache and MySQL)
    • Server 4: HTTP Node + DB Slave (running Apache and MySQL)

    It would be advantageous to have a fifth server that runs support services such as Nagios, Munin, a TFTP daemon for a PXE boot environment, a small HTTP server for kickstart/preseed files, a DHCP daemon, and maybe serial consoles via a RocketPort or similar.

    The massive advantage of using Puppet to deploy your own systems, instead of having a single image, is that the resources are effectively self-documenting. It's a lot clearer and less black-box than just having an image you drop onto servers. Plus, it makes updates and changes far simpler.
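
    As a rough illustration of the Puppet approach, a minimal manifest for one of the HTTP nodes might look like the following. This is a sketch only; the package and service names are assumptions and vary by distribution.

    ```shell
    #!/bin/sh
    # Write a hypothetical minimal Puppet manifest for an HTTP node and
    # apply it locally. Package/service names are distribution-dependent.
    cat > web.pp <<'EOF'
    package { 'apache2':
      ensure => installed,
    }
    service { 'apache2':
      ensure  => running,
      enable  => true,
      require => Package['apache2'],
    }
    EOF
    puppet apply web.pp
    ```

    In practice you would serve such manifests from a central puppetmaster rather than applying them by hand on each node.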

    sly : this is a good answer too, but it seems I can only pick one.
  • As exciting as SSI sounds, they're quite unlikely to perform optimally.

    Since your main targets are web apps, you can (should!) use the current best practices. Typically, these start with:

    • a caching load balancer as frontend (Squid, Varnish, nginx)
    • several HTTP servers for the web apps (typically Apache; might be nginx+FastCGI, whatever)
    • database

    If well done, your first bottleneck will be the database; at that point, you should:

    • add a cache to your web apps to reduce DB hits to a minimum (modern frameworks like RoR and Django include great support for memcached)
    • move some kinds of jobs out of the DB to more specialized apps; first candidates are task queues (to RabbitMQ or similar) and key/value stores (to Tokyo Cabinet, Redis, MongoDB; there are lots of them)
    • distribute the DB: if it's many reads/few writes, try master/slave replication (easy on MySQL), though in that case memcached should have absorbed most of the load already. Also try sharding.

    If you ever outgrow this (are you Facebook?), you'll have to rethink your whole structure à la Google (where they do almost everything 'off the line' with MapReduce).

    From Javier

VMware Pricing Confusion

We are considering purchasing a VMware bundle, the VMware vSphere 4 Essentials bundle for 3 hosts (one can buy it online: http://store.vmware.com/store/vmware/en_US/DisplayProductDetailsPage/productID.126843700).

For several days, I have been trying my hardest to answer what appears (to me, anyway) to be a simple question: if, at some point in the future, we need to install ESXi on a 4th host and want to manage that host with vCenter, what fees will we need to pay to do so?

I have tried calling VMware sales -- they don't return calls, apparently. I tried emailing VMware sales -- no response in over 3 days.

I have looked over their web site but can't find a concrete answer.

It might be that one simply pays $795 per processor to license additional vSphere hosts at the Essentials level. I seriously doubt this is the case, because I believe the vCenter license that comes with the Essentials bundle is permanently limited to 3 hosts. This leads to another interesting question: will the vSphere licenses that come with the bundle work with a vCenter licensed later on?

I have a sneaking suspicion that the answer is that there is no upgrade path. If/when you want to manage 4 hosts, you need to re-license everything (vCenter and the vSphere hosts).

Does anyone out there know the answer to this question? Furthermore, does anyone know a way to get quick answers out of VMware on this sort of thing (a live chat or something)?

  • I believe you are correct that you will have to re-license the vCenter Server. The next jump up is 16 hosts.
    ESXi is free, so you won't have to purchase anything there, though you won't have support unless you buy it for each host.
    As others have suggested, a VAR is your best bet. CDW has knowledgeable people in this area. VMware salespeople have never called me back either; apparently they are trying to gain Oracle's reputation... :-)

    MarkM : Not sure why you're saying the next jump is 16 hosts. Full vSphere licensing is done per processor.
    Scott Lundberg : vCenter is by host, though...
    Brent Ozar : He's right about re-licensing the vCenter server.

SVN servers auto-synchronise

I have an SVN server on our LAN; it runs on Windows, and the developers check in and out from it. Just to be on the safe side, we have taken a Linux server from Rackspace. Is it possible to do an automatic weekly synchronisation from the local SVN server to the remote one? The remote server will mainly be used as a backup, but somebody could also access it if needed, since there is no static or external IP for our LAN.

  • You can easily rsync the files in one direction, but nobody can be allowed to use the repository at the other end.

    There would be no way of reconciling merge issues - you'd end up with two revisions having the same revision number but different content committed; it would be havoc.

    Either use a distributed VC such as Bazaar, or treat the offsite backup as strictly a backup - don't write to it.

    Distributed systems typically allow the developer to work on a branch hosted locally (which they can commit changes to as normal) and subsequently merge changes down/up to a "higher" repository. History of the changes is of course, maintained.
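
    If you do treat the offsite copy as a pure backup, one way to get a consistent copy to rsync is `svnadmin hotcopy`, which safely copies a live repository. This is only a sketch; the paths and the remote host are placeholders.

    ```shell
    #!/bin/sh
    # Take a consistent copy of a live Subversion repository, then push
    # it offsite. Paths and the remote host are placeholders.
    svnadmin hotcopy /var/svn/repo /var/svn/repo-backup
    rsync -a --delete /var/svn/repo-backup/ \
      backup.example.com:/srv/svn/repo-backup/
    ```

    Unlike rsyncing the live repository directory, this avoids copying a repository mid-commit.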

    zapping : Then I guess the changes from the remote SVN will need to be added to the LAN first, and then the weekly auto-sync will take care of the rest.
    RichieACC : Using rsync on an SVN repository is a really, really bad idea. If a file were to change in the middle of a sync, it could corrupt the destination repository, rendering it useless.
    MarkR : In which case, take a consistent filesystem snapshot and rsync that. I'd assumed the backup would be taken out of hours, when commits were unlikely; this is not going to work if there are lots of developers in multiple time zones.
    From MarkR
  • You can have a post-commit hook update your backup repository automatically. That way, you have a real-time backup.

    Alternatively, you can set up a cron job to update your backup repository at whatever interval you like.
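
    A sketch of the post-commit approach, assuming the mirror has already been initialised with Subversion's built-in svnsync tool (the mirror URL below is a placeholder):

    ```shell
    #!/bin/sh
    # hooks/post-commit in the master repository (make it executable).
    # Pushes new revisions to the mirror in the background so commits
    # are not slowed down. The mirror URL is a placeholder.
    svnsync sync --non-interactive "svn://backup.example.com/repo-mirror" &
    ```

    Running the sync in the background keeps commit latency low; a cron job can catch any revisions a failed hook run missed.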

    From Veynom
  • There is an SVN tool that will keep two SVN servers in sync; sorry, I can't remember its name. It makes use of the same protocol the SVN client uses to talk to the server.

    You are likely to find a lot more SVN users on Stack Overflow.

  • Just to add to Veynom's answer: you could use the built-in svnsync tool to create a consistent backup of your local repository. Using tools like rsync is not recommended, as you could rsync an inconsistent version which might be unusable when you really need it.
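
    The one-time setup for svnsync looks roughly like this (a sketch; the URLs and paths are placeholders, and the mirror should be writable only by the sync user):

    ```shell
    #!/bin/sh
    # One-time svnsync setup: create the mirror, allow revision-property
    # changes on it, tie it to the master, then run the initial sync.
    # URLs and paths are placeholders.
    svnadmin create /srv/svn/repo-mirror
    # svnsync needs to set revision properties on the mirror, so install
    # a pre-revprop-change hook that always allows it (exits 0):
    echo '#!/bin/sh' > /srv/svn/repo-mirror/hooks/pre-revprop-change
    chmod +x /srv/svn/repo-mirror/hooks/pre-revprop-change
    svnsync init file:///srv/svn/repo-mirror http://svn.example.com/repo
    svnsync sync file:///srv/svn/repo-mirror
    ```

    After this, repeating `svnsync sync` (from a hook, cron job, or Windows scheduled task) pulls across any new revisions.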

    RichieACC : You can set up a scheduled task in Windows to run svnsync at a regular interval. You can even sync to multiple locations at different intervals. When setting up the sync though, check to make sure that all the repositories have the same guid. This makes switching to one of them easier when you need to, otherwise things get messy. Also make sure that the backup repositories are exactly that, backups. Nobody should be writing to them at all. They're to be used only when your primary repo fails.
    From katriel

Google Apps for Domains, Multiple Domains

I have a primary Google Apps for Domains account which I use for my personal email, calendar, docs, etc., and it is great.

I also receive my POP3 company email via Settings -> Get mail from other accounts in my account.

Due to spam I want to make use of gmail servers for my company email and have two options:

[1] Add my second domain as a domain alias
[2] Create a new Apps for Domains account

If I do [1] above, do I access (send and receive) my company email as if it were a separate account, or is it merged into my primary domain? I want the two separated.

If I perform [2], can I share my contacts/calendar between the two?

I also have Act! contact manager, which syncs to my primary domain, and it is getting messy now, with personal and work contacts being changed/synced to my Act! CM software. I want to try to separate my personal and work contacts (but keep the work ones available in my primary domain).

Hope this makes sense!

Your suggestions are gratefully accepted.

Thank you

  • To answer the parts of your question I can:

    If you put the two emails on one Apps account, then you'll have one inbox and all the messages will come into it, though you can use filters to separate them into different labels.

    If you use separate accounts, then sharing your calendar is pretty simple, but contacts are a pain. I've been trying to do this for a while with a google apps account and a gmail account and haven't been able to get my contacts automatically syncing well.

    belliez : The problem with sharing an inbox is that when you send from the 2nd domain, the recipient sees "From domain2.com on behalf of domain1.com", which I don't really want.
    Tom Kiley : Yeah - you could make 2 Apps accounts and then forward one of them to the other. Then you can set it up so it uses an external SMTP server. The external SMTP server would just be Gmail again, but authenticated with the other email address. The end effect is that email2 would get forwarded to email1, so the messages would appear in both email1's and email2's inboxes. When you sent an email from email2 via your email1 interface, it would use an "external" SMTP server, so it would appear to come only from email2. It would also get saved in email2's sent items. I hope that makes some sense.
    belliez : Yeah, it does. I have decided to create a new Apps for Domains account and sync the contacts / share the calendar.
    From Tom Kiley