Friday, April 29, 2011

Asp.Net LinkButton Onclick = method( container.dataitem ), need help with syntax

I have a LinkButton that I want to call a method in the code-behind. The method takes a parameter, into which I need to stick a Container.DataItem value. I know the Container.DataItem syntax is correct because I use it in other controls. What I don't know is how to use it as a parameter for a method. Upon clicking the button, the method should be called with the Container.DataItem. The method is called 'AddFriend(string username)'. Below is the code. Thank you!

<asp:LinkButton ID="lbAddFriend" runat="server" OnClick='<%# "AddFriend(" +((System.Data.DataRowView)Container.DataItem)["UserName"]+ ")" %>' Text="AddFriend"></asp:LinkButton></td>
From stackoverflow
  • You need to use a ButtonField and handle the click in RowCommand. Check the MSDN docs

     <asp:buttonfield buttontype="Link" 
                      commandname="Add" 
                      text="Add"/>
    

    And in the code behind...

      void ContactsGridView_RowCommand(Object sender, GridViewCommandEventArgs e)
      {
        if(e.CommandName=="Add")
        {
             // For a ButtonField, CommandArgument holds the row index;
             // Container.DataItem is not available in the code-behind, so
             // this assumes DataKeyNames="UserName" is set on the grid.
             int index = Convert.ToInt32(e.CommandArgument);
             AddFriend(ContactsGridView.DataKeys[index].Value.ToString());
        }
      }
    
    : The LinkButton is in a DataList.
  • I think the same thing applies to a DataList, but I've been using this for a Repeater in my code-behind. Maybe use DataListItemEventArgs and DataListCommandEventArgs in place of the Repeater versions.

    protected void rptUserInfo_Data(object sender, RepeaterItemEventArgs e)
    {
        if (e.Item.ItemType == ListItemType.Item || e.Item.ItemType == ListItemType.AlternatingItem)
        {
            UserInfo oUserInfo = e.Item.DataItem as UserInfo;
    
            LinkButton hlUser = e.Item.FindControl("hlUser") as LinkButton;
            hlUser.Text = oUserInfo.Name;
            hlUser.CommandArgument = oUserInfo.UserID + ";" + oUserInfo.uName;
            hlUser.CommandName = "User";
        }
    }
    public void UserArtItem_Command(Object sende, RepeaterCommandEventArgs e)
    {
        if (e.CommandName == "User")
        {
            string command = e.CommandArgument.ToString();
            string[] split = command.Split(new Char[] { ';' });
    
            Session["ArtUserId"] = split[0];
            Session["ArtUserName"] = split[1];
            Response.Redirect("~/Author/" + split[1]);
        }
    }
    
  • Maybe this?

    <asp:LinkButton ID="lbAddFriend" runat="server"
     Text="Add Friend" OnCommand="AddFriend"
     CommandArgument='<%# Eval("UserName").ToString() %>' />
    

    Then in the code:

    Protected Sub AddFriend(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.CommandEventArgs)
    Dim UserName As String = e.CommandArgument.ToString()
        'Rest of code
    End Sub
    

visual studio team suite : How to web test programmatically?

Good day,

I'm new to Visual Studio 2005 Team Suite web testing. Here's the action I'm trying to achieve.

On a webpage, I have a dropdownlist that is populated from a simple database table. In my tests, I want to click on each item of the dynamically populated dropdownlist and, after the postback, check whether a label is visible on the page, which depends on the selected item in the dropdownlist.

However, since the dropdownlist is dynamically generated, I cannot simply 'record' and manually click on each item of the dropdownlist, so I have to code the test.

I'm pretty sure this is really simple to achieve, so I was wondering if there is a good free tutorial on web testing that I could use.

Thank you very much!

From stackoverflow
  • As long as it is fairly standard web-form postbacks you can use NUnitAsp:

    http://nunitasp.sourceforge.net/

    Here is a decent article on it here: ServerSide NUnitASP

    NUnitAsp is no longer really active or supported, but it still works well enough for simple form testing.

    YordanGeorgiev : It seems not to be supported or developed anymore: http://nunitasp.sourceforge.net/
  • Your question:

    However, since the dropdownlist is dynamically generated, I cannot simply 'record' and manually click on each item of the dropdownlist, so I have to code the test.

    It seems to me that you need to add "Data Binding" to your test.

    Please read through this MSDN article.

    HTH.

  • Interesting. I will also try that. Thanks for all the suggestions

Splitting up a utility DLL into smaller components in C++

We have a core library in the form of a DLL that is used by more than one client application. It has gotten somewhat bloated and some applications need only a tiny fraction of the functionality that this DLL provides. So now we're looking at dividing this behemoth into smaller components.

My question is this: Can anyone recommend a path to take to divide this bloated DLL into a set of modules that have some interdependencies but do not necessarily require all other modules?

Here are the options as I see them but I'm hoping someone can offer other possibilities:

  1. Create a "core" dll and several "satellite" dlls which use the core and possibly other satellite DLLs.
  2. Subdivide the contents of the bloated DLL into static libraries that the main DLL uses (to maintain the same functionality); apps that don't want the bloated version can then assemble just the static libraries they need into their own DLL or into the app itself.

I was hesitant to mention this but I think it may be important to note that the app uses MFC.

Thanks for your thoughts.

From stackoverflow
  • Somewhat related to your question is this question, about splitting up a very large C module into smaller ones.

    http://stackoverflow.com/questions/748503/how-do-you-introduce-unit-testing-into-a-large-legacy-c-c-codebase

    It seems your question has to do with the larger question of breaking some large blob of code into a more modular system. The link above is definitely recommended reading.

    Karim : That is definitely part of it, but it has a lot to do also with the intricacies of dependencies and how difficult it would be to maintain this in a DLL environment vs. a static library. There are a lot of unknowns and I was hoping someone could guide me.
  • Without having all the details it is a little hard to help, but here is what I would do in your situation:

    • provide both static and DLL versions of whatever you release - for multithreaded and single-threaded use.
    • try to glean from the disparate clients which items should be grouped together to provide reasonable segmentation - without having layers of dependencies.

    Having a "core" module sounds like a good idea - and make sure you don't have too many levels of dependencies; you might want to keep it simple.

    You may find after the exercise that one big dll is actually reasonable.

    Another consideration is that maintaining multiple DLLs and both static libs and DLLs will hugely increase the complexity of maintenance.

    Are you going to be releasing them all at once every time, or are they going to be mix and match? Be careful here - and know that you could create testing issues

    If no one is complaining about the size of the DLL then you might want to consider leaving it as is.

    Karim : I'm inclined to agree with your last point. There is a particular client who wants a customized version of the app that does less but is also much smaller. I would *much* rather talk them out of that than go to the immense trouble of pursuing this.

PHP: "Global" Include

Current situation:

  • I have the current version of my MVC Framework which uses classes as controllers.
  • I have some "vintage" modules from my old MVC Framework which uses simple, flat includes as controllers.

Much simplified that means:

New Version:

<?PHP
class blaController extends baseController {
    private $intVar;

    function dosomethingFunction() {
        $this->intVar = 123;
        $this->view('myView');
    }
}
?>

Old Version:

<?PHP
$globalVar = 123;
// view "controllername" is automatically shown
?>

I'm now trying to write a wrapper to be able to use my old controllers in my new MVC without having to rewrite everything. To do so, I have a "wrapper" controller:

class wrapController extends baseController {
    function dosomethingFunction() {
        require 'old_dosomething.function.php';
        $this->view('old_dosomething_view');
    }
}

(Once again: This is VERY, VERY simplified - just to get the idea over. Not actual code.)

The problem with that approach is, that the previously global variable $globalVar now only exists inside of the method "dosomethingFunction" and cannot be accessed by the view.

This wouldn't be the case if I could force the require to behave as "in global scope" so that $globalVar would once again be available in global scope.

So: Is there some way to achieve "require_global" or something similar?

(One solution for my problem would be to modify my old controllers to start with a bunch of "global" commands, but I'd prefer a solution where I don't have to change so much of that old code.)

(Note: Please don't tell me that GLOBALS are bad. It totally misses the point of this question. Just accept that it is a requirement to keep some old code working in a newer, cleaner environment.)

From stackoverflow
  • Hmmm, this is an issue I've never before seen. I suppose you could do this

    class wrapController extends baseController {
        function dosomethingFunction() {
            require 'old_dosomething.function.php';
    
            // Force "old" globals into global scope
            $GLOBALS['globalVar'] = $globalVar;
    
            $this->view('old_dosomething_view');
        }
    }
    

    But that's a pretty tedious, manual process as well, depending on how many globals we're talking about. I'll think about this, but I don't know of any "auto-magic" solution off the top of my head.

    BlaM : "$GLOBALS['globalVar'] = $globalVar;" would be an option if I could iterate through all local variables, but is there an "automated" way to find all of those?
  • You can add local variables defined within dosomethingFunction() to global scope:

    class wrapController extends baseController {
        function dosomethingFunction() {
            require 'old_dosomething.function.php';
            //begin added code  
            $vararr = get_defined_vars();
            foreach($vararr as $varName => $varValue) 
                  $GLOBALS[$varName] = $varValue;            
            //end added code          
            $this->view('old_dosomething_view');
        }
    }
    

    Note that for this to work as expected, you should call require before doing anything else in the function. get_defined_vars() returns only variables from the current scope, so no array_diff hacks are needed.

    BlaM : I'm not yet sure why, but somehow array_merge doesn't work in my scenario while merging the array "manually" with foreach works.
    vartec : well, $GLOBAL is not really "normal" array. Rolled back to foreach version.
  • This is the easiest solution I can think of.

    Use the get_defined_vars() function twice and get a diff of each call to determine what variables were introduced by the required file.

    Example:

    $__defined_vars       = get_defined_vars();
    require('old_dosomething.function.php');
    $__newly_defined_vars = array_diff_assoc(get_defined_vars(), $__defined_vars);
    $GLOBALS = array_merge($GLOBALS, $__newly_defined_vars);
    $this->view('old_dosomething_view');
    
    vartec : no need to do array_diff if you're calling that from within a function. get_defined_vars() only returns variables within the current scope.
    Matt : Good point. Though it's certainly possible that someone later adds variables to that function.
    BlaM : array_diff_assoc fails if one of the variables is an object that cannot be converted to string.
    vartec : @mcreenan: but if you call get_defined_vars() *before* more variables are added, then you'll get only the ones from the included file.
  • Hi

    Have you tried Zend_Registry from Zend Framework?

    The registry is a container for storing objects and values in the application space. By storing the value in the registry, the same object is always available throughout your application. This mechanism is an alternative to using global storage.

    http://framework.zend.com/manual/en/zend.registry.html

    BlaM : That would require modifying the old code which is just what I try to avoid.
    BlaM : ... and I don't see how Zend_Registry is better than a simple global array to store stuff...
  • For anybody interested: My (so far) final version:

    class wrapController extends baseController {
        function dosomethingFunction() {
            // ... do some initialisation stuff ...
    
            $__defined_vars = array_keys(get_defined_vars());
    
            require 'old_dosomething.function.php';
    
            $__newly_defined_vars = array_diff(
                                        array_keys(get_defined_vars()),
                                        $__defined_vars, 
                                        array('__defined_vars')
                                    );
            foreach ($__newly_defined_vars as $var) {
                $GLOBALS[$var] = &$$var;
            }
    
            $this->view('old_dosomething_view');
        }
    }
    

    Ugly, but it works. Thanks for all your great help!

Cannot insert string into MySQL text column

For some reason, my queries screw up when I write to a column of type "text". Here is an example:

Describe messages;

Format is: Field / Type / Null / Key / Default / Extra

Field         Type          Null  Key  Default  Extra
id            int(11)       NO    PRI  NULL     auto_increment
title         varchar(255)  YES        NULL
body          text          YES        NULL
to            text          YES        NULL
content_type  varchar(255)  YES        NULL
is_sms        tinyint(1)    YES        NULL
user_id       int(11)       YES        NULL
created_at    datetime      YES        NULL
updated_at    datetime      YES        NULL

Then I try an insert:

INSERT INTO messages (id,title,body,to) VALUES ('1','Test Message','This is a test message. This is a test message. This is a test message. This is a test message.', 'an email' );

For some reason this causes a general MySQL syntax error. The query works fine if I remove the "to" column and its corresponding value from the query.

Any ideas?

From stackoverflow
  • 'to' is a reserved keyword in MySQL. You'll need to rename your column.

    http://dev.mysql.com/doc/refman/5.1/en/reserved-words.html

    However, reserved words are permitted as identifiers if you quote them.

    http://dev.mysql.com/doc/refman/5.1/en/identifiers.html

  • INSERT
    INTO     messages (id,title,body,`to`)
    VALUES   ('1','Test Message','This is a test message. This is a test message. This is a test message. This is a test message.', 'an email' );
    
  • Try this instead

    INSERT INTO messages (`id`,`title`,`body`,`to`) 
       VALUES ('1','Test Message','This is a test message. 
       This is a test message. This is a test message. This is a test message.', 
       'an email' );
    
  • I believe if you surround the "to" with backticks like so:

    INSERT INTO messages (id,title,body,`to`) VALUES ('1','Test Message','This is a test message. This is a test message. This is a test message. This is a test message.', 'an email' );
    

    it will work - did for me anyway.

java Swing debugging headaches with Wacom pen tablet

I've been running up against a problem with Java Swing + my Wacom Graphire tablet for a few years in several Java applications and have now encountered it in my own.

I use a pen tablet to get around wrist issues while clicking a mouse, and it works fine under Windows except when I'm using Java applications. In Java applications, the single-click of the pen doesn't work correctly. (Usually the problem only occurs with file-selection dialog boxes or tree controls.) The pen tablet also comes with a wireless mouse that works with the same tablet, and its single-click does work correctly.

I don't know whether the problem is in the WACOM driver or in the Java Swing runtime for Windows or both. Has anyone encountered this before? I'd like to file a bug report with WACOM but I have no idea what to tell them.

I have been able to reproduce this in my own application that has a JEditorPane with an HTML document that I've added a HyperlinkListener to. I get HyperlinkEvent.ACTIVATED events on every single click with the mouse, but I do NOT get HyperlinkEvent.ACTIVATED events on every single click with the pen.

One big difference between a pen and a mouse is that when you click a button on a mouse, it's really easy to cause the button-click without mouse movement. On the pen tablet it is very hard to do this, and that seems to correlate with the lack of HyperlinkEvent.ACTIVATED events -- if I am very careful not to move the pen position when I tap the tablet, I think I can get ACTIVATED events.

Any suggestions for things to try so I can give WACOM some good information on this bug? It's really frustrating to not be able to use my pen with Java apps, especially since the pen works fine with "regular" Windows (non-Java) applications.

Normally I wouldn't ask this question here but I'd like to find out from a programmer's standpoint what might be going on so I can file a good bug report.

From stackoverflow
  • What you should do is add a MouseListener and see when it registers mouseClicked(), mousePressed(), and mouseReleased() events. I'm not sure whether Swing reads the tablet pen as a mouse, though. However, it should give you some insight into what's actually going on.

    Jason S : Great! I haven't used mouseListener before but it worked like a charm.
  • I think you already got the answer yourself: moving the pen results in some other event than a simple click, perhaps a drag-and-drop-like event. I'm not sure whether it's a Java/Swing or a Wacom problem; it could be that the tablet doesn't register the clicks as such but as drag events, or it could be that Swing interprets the events incorrectly.

  • I tried dr.manhattan's suggestion and it works like a charm. I get mousePressed/mouseReleased events correctly; mouseClicked events always happen with the pen tablet's mouse, but mouseClicked events do not happen with the pen unless I manage to keep the pen very still. Even a 1-pixel movement is enough to make it fail. I guess I should blame Java for this one: there's no way to specify a "click radius" for acceptable movement.

    package com.example.bugs;

    import java.awt.Dimension;
    import java.awt.event.MouseEvent;
    import java.awt.event.MouseListener;

    import javax.swing.JFrame;

    public class WacomMouseClickBug {
        public static void main(String[] args) {
            JFrame jframe = new JFrame();

            jframe.addMouseListener(new MouseListener() {
                @Override public void mouseClicked(MouseEvent event) {
                    System.out.println("mouseClicked: " + event);
                }
                @Override public void mouseEntered(MouseEvent event) {}
                @Override public void mouseExited(MouseEvent event) {}
                @Override public void mousePressed(MouseEvent event) {
                    System.out.println("mousePressed: " + event);
                }
                @Override public void mouseReleased(MouseEvent event) {
                    System.out.println("mouseReleased: " + event);
                }
            });

            jframe.setPreferredSize(new Dimension(400, 400));
            jframe.pack();
            jframe.setLocationRelativeTo(null);
            jframe.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            jframe.setVisible(true);
        }
    }
    

How can I associate images with entities in Google App Engine

I'm working on a Google App Engine application, and have come to the point where I want to associate images on the filesystem to entities in the database.

I'm using the bulkupload_client.py script to upload entities to the database, but am stuck trying to figure out how to associate filesystem files to the entities. If an entity has the following images: main,detail,front,back I think I might want a naming scheme like this: <entity_key>_main.jpg

I suppose I could create a GUID for each entity and use that, but I'd rather not have to do that.

Any ideas?

I think I can't use the entity key since it might be different between local and production datastores, so I would have to rename all my images after a production bulkupload.

From stackoverflow
  • I see two options here based on my very limited knowledge of GAE.

    First, you can't actually write anything to the file system in GAE, right? That would mean that any images you want to include would have to be uploaded as a part of your webapp and would therefore have a static name and directory structure that is known and unchangeable. In this case, your idea of <entity_key>_main.jpg, or /entity_key/main.jpg, would work fine.

    The second option is to store the images as a blob in the database. This may allow for uploading images dynamically rather than having to upload a new version of the webapp every time you need to update images, but it would quickly eat into your free database space. Here's some information on serving pictures from the database: http://code.google.com/appengine/articles/images.html

    MStodd : Yes, I would be uploading the images when I deploy the project. I think I can't use the entity key since it might be different when I bulkupload to my dev datastore and the production datastore.
    digitaljoel : yeah, you would need some sort of identifier that you have control over and is guaranteed not to change.
  • There is a GAE tutorial on how to Serve Dynamic Images with Google App Engine. Includes explanations & downloadable source code.

  • If you're uploading the images statically, you can use the key based scheme if you want: Simply assign a key name to the entities, and use that to find the associated images. You specify a key name (in the Python version) as the key_name constructor argument to an entity class:

    myentity = EntityClass(key_name="bleh", foo="bar", bar=123)
    myentity.put()
    

    and you can get the key name for an entity like so:

    myentity.key().name()
    

    It sounds like the datastore entities in question are basically static content, though, so perhaps you'd be better off simply encoding them as literals in the source and thus having them in memory, or loading them at runtime from local data files, bypassing the need to query the datastore for them?

LIMIT in FoxPro

I am attempting to pull a lot of data from a FoxPro database, work with it, and insert it into a MySQL DB. It is too much to do all at once, so I want to do it in batches of, say, 10,000 records. What is the equivalent of LIMIT 5, 10 in FoxPro SQL? I would like a select statement like

select name, address from people limit 5, 10;

i.e. get only 10 results back, starting at offset 5. I have looked around online and they only make mention of TOP, which is obviously not of much use.

From stackoverflow
  • I had to convert a FoxPro database to MySQL a few years ago. What I did to solve this was add an auto-incrementing id column to the FoxPro table and use that as the row reference.

    So then you could do something like:

    select name, address from people where id > 5 and id <= 15;
    

    The FoxPro SQL documentation does not show anything similar to LIMIT.

  • Take a look at the RecNo() function.

    Clinemi : I've done this a couple of times. Add a new number column, then do a replace all with RECNO(). It is the same as an identity column. I would then follow a scenario like the one described by Mark Robinson.
    Eyvind : Yeah, but you wouldn't need a new column when using recno(), you could just issue a command like SELECT * FROM people where recno() > 10 and recno() < 50
    Brian Vander Plaats : Also, recno() only respects the physical order of records in the table - if you sort the records you will not get the results you want with recno()
  • FoxPro does not have direct support for a LIMIT clause. It does have "TOP nn" but that only provides the "top-most records" within a given percentage, and even that has a limitation of 32k records returned (maximum).

    You might be better off dumping the data as a CSV, or if that isn't practical (due to size issues), writing a small FoxPro script that auto-generates a series of BEGIN-INSERT(x10000)-COMMIT statements that dump to a series of text files. Of course, you would need a FoxPro development environment for this, so this may not apply to your situation...

  • To expand on Eyvind's answer, I would create a program that uses the RecNo() function to pull records within a given range, say 10,000 records.

    You could then programmatically cycle through the large table in chunks of 10,000 records at a time and perform your data load into your MySQL database.

    By using the RecNo() function you can be certain not to insert rows more than once, and you can restart at a known point in the data load process. That by itself can be very handy in the event you need to stop and restart the load process.
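The chunked load described above can be sketched as follows. This is a Python sketch of the batching pattern only: fetch_batch() is a placeholder standing in for the real FoxPro range query (e.g. `SELECT ... WHERE RECNO() > start AND RECNO() <= start + size`), and the caller supplies the function that does the MySQL insert.

```python
BATCH_SIZE = 10000

def fetch_batch(records, start, size):
    """Stand-in for pulling one RecNo() range from FoxPro."""
    return records[start:start + size]

def load_all(records, insert_batch):
    done = 0  # persist this counter to restart at a known point
    while True:
        batch = fetch_batch(records, done, BATCH_SIZE)
        if not batch:
            break
        insert_batch(batch)  # e.g. a multi-row INSERT into MySQL
        done += len(batch)
    return done
```

The `done` counter is the restart point: persist it after each committed batch and you can resume the load without inserting any row twice.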

Linux MMAP internals

I have several questions regarding the mmap implementation in Linux systems which don't seem to be very much documented:

When mapping a file to memory using mmap, how would you handle prefetching the data in such file?

I.e. what happens when you read data from the mmapped region? Is that data moved to the L1/L2 caches? Is it read directly from the disk cache? Do prefetchnta and similar ASM instructions work on mmapped regions?

What's the overhead of the actual mmap call? Is it relative to the amount of mapped data, or constant?

Hope somebody has some insight into this. Thanks in advance.

From stackoverflow
  • mmap is basically programmatic access to the Virtual Memory subsystem.

    When you have, say, 1G file, and you mmap it, you get a pointer to "the entire" file as if it were in memory.

    However, at this stage nothing has happened save the actual mapping operation of reserving pages for the file in the VM. (The larger the file, the longer the mapping operation, of course.)

    In order to start reading data from the file, you simply access it through the pointer you were returned in the mmap call.

    If you wish to "preload" parts of the file, just visit the area you'd like to preload. Make sure you visit ALL of the pages you want to load, since the VM will only load the pages you access. For example, say within your 1G file, you have a 10MB "index" area that you'd like to map in. The simplest way would be to just "walk your index", or whatever data structure you have, letting the VM page in data as necessary. Or, if you "know" that it's the "first 10MB" of the file, and that your page size for your VM is, say, 4K, then you can just cast the mmap pointer to a char pointer, and just iterate through the pages.

    void load_mmap(char *mmapPtr) {
        // We'll load 10MB of data from the mmap, one byte per 4K page
        for (int offset = 0; offset < 10 * 1024 * 1024; offset += 4 * 1024) {
            char *p = mmapPtr + offset;
            // deref pointer to force mmap load
            char c = *p;
            (void)c;
        }
    }
    

    As for L1 and L2 caches, mmap has nothing to do with that; that's all about how you access the data.

    Since you're using the underlying VM system, anything that addresses data within the mmap'd block will work (even from assembly).

    If you don't change any of the mmap'd data, the VM will automatically flush out old pages as new pages are needed. If you actually do change them, then the VM will write those pages back for you.

    Laurynas Biveinis : Wouldn't char "c = *p" be optimized away? Should c be declared volatile?
  • It has nothing to do with the CPU caches; mmap maps the file into virtual address space, and if it's subsequently accessed, or locked with mlock(), then it is brought physically into memory. Which CPU caches the data ends up in is nothing you really have control over (at least, not via mmap).

    Normally touching the pages is necessary to cause it to be mapped in, but if you do a mlock or mlockall, that would have the same effect (these are usually privileged).

    As far as the overhead is concerned, I don't really know; you'd have to measure it. My guess is that an mmap() which doesn't load pages in is a more or less constant-time operation, but bringing the pages in will take longer with more pages.

    Recent versions of Linux also support a flag, MAP_POPULATE, which instructs mmap to load the pages in immediately (presumably only if possible).
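The "visit every page you want loaded" approach from the first answer can be sketched in Python, using its mmap module as a stand-in for the C API (MAP_POPULATE itself is a mmap(2) flag and is not shown; the temp file and sizes are illustrative):

```python
import mmap
import os
import tempfile

PAGE = mmap.PAGESIZE

def preload(mm, length):
    """Fault the mapping's pages in by touching one byte per page --
    the manual "preload by visiting" approach described above."""
    total = 0
    for off in range(0, length, PAGE):
        total += mm[off]  # the read forces the page to be mapped in
    return total

# Demo on a small temporary file (8 pages of 0x01 bytes).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x01" * (8 * PAGE))
    path = f.name

with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    touched = preload(mm, mm.size())
    mm.close()
os.unlink(path)
```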

  • One more question regarding mmap(): I would like to share a memory-mapped file between two different processes. How do I do that?

    Jiri : just pass the same file descriptor to mmap and you have a shared memory block.
  • Answering Mr. Ravi Phulsundar's question:

    Multiple processes can map the same file as long as the permissions are set correctly. Looking at the mmap man page, just pass the MAP_SHARED flag (if you need to map a really large file, use mmap2 instead):

    mmap

    MAP_SHARED

    Share this mapping with all other processes that map this object. Storing to the region is equivalent to writing to the file. The file may not actually be updated until msync(2) or munmap(2) are called.
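To illustrate, here is a minimal Python sketch of two processes sharing one file-backed mapping (Python's mmap defaults to a shared, MAP_SHARED-style mapping for file-backed maps on Unix; the temp-file path is illustrative):

```python
import mmap
import os
import tempfile

# A file-backed mapping created before fork() is visible to both
# processes: the child stores into the region, the parent reads the
# update back.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x00" * mmap.PAGESIZE)
    path = f.name

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)  # read/write, shared by default
    pid = os.fork()
    if pid == 0:
        mm[0:5] = b"hello"  # child: store into the shared region
        os._exit(0)
    os.waitpid(pid, 0)      # parent: wait for the child to finish
    shared = bytes(mm[0:5])
    mm.close()
os.unlink(path)
```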

PHP Daemon/worker environment

Problem: I want to implement several PHP worker processes that listen on an MQ-server queue for asynchronous jobs. The problem is that simply running these processes as daemons on a server doesn't really give me any level of control over the instances (load, status, locked up)... except maybe for dumping ps aux. Because of that I'm looking for a runtime environment of some kind that lets me monitor and control the instances, either on system (process) level or on a higher layer (some kind of Java-style appserver).

Any pointers?

From stackoverflow
  • Here's some code that may be useful.

    <?
    define('WANT_PROCESSORS', 5);
    define('PROCESSOR_EXECUTABLE', '/path/to/your/processor');
    set_time_limit(0);
    $cycles = 0;
    $run = true;
    $reload = false;
    declare(ticks = 30);
    
    function signal_handler($signal) {
        switch($signal) {
        case SIGTERM :
            global $run;
            $run = false;
            break;
        case SIGHUP  :
            global $reload;
            $reload = true;
            break;
        }   
    }
    
    pcntl_signal(SIGTERM, 'signal_handler');
    pcntl_signal(SIGHUP, 'signal_handler');
    
    function spawn_processor() {
        $pid = pcntl_fork();
        if($pid) {
            global $processors;
            $processors[] = $pid;
        } else {
            if(posix_setsid() == -1)
                die("Forked process could not detach from terminal\n");
            fclose(STDIN);
            fclose(STDOUT);
            fclose(STDERR);
            pcntl_exec(PROCESSOR_EXECUTABLE);
            die('Failed to fork ' . PROCESSOR_EXECUTABLE . "\n");
        }
    }
    
    function spawn_processors() {
        global $processors;
        if($processors)
            kill_processors();
        $processors = array();
        for($ix = 0; $ix < WANT_PROCESSORS; $ix++)
            spawn_processor();
    }
    
    function kill_processors() {
        global $processors;
        foreach($processors as $processor)
            posix_kill($processor, SIGTERM);
        foreach($processors as $processor)
            pcntl_waitpid($processor, $status);
        unset($processors);
    }
    
    function check_processors() {
        global $processors;
        $valid = array();
        foreach($processors as $processor) {
            pcntl_waitpid($processor, $status, WNOHANG);
            if(posix_getsid($processor))
                $valid[] = $processor;
        }
        $processors = $valid;
        if(count($processors) > WANT_PROCESSORS) {
            for($ix = count($processors) - 1; $ix >= WANT_PROCESSORS; $ix--)
                posix_kill($processors[$ix], SIGTERM);
            for($ix = count($processors) - 1; $ix >= WANT_PROCESSORS; $ix--)
                pcntl_waitpid($processors[$ix], $status);
        } elseif(count($processors) < WANT_PROCESSORS) {
            for($ix = count($processors); $ix < WANT_PROCESSORS; $ix++)
                spawn_processor();
        }
    }
    
    spawn_processors();
    
    while($run) {
        $cycles++;
        if($reload) {
            $reload = false;
            kill_processors();
            spawn_processors();
        } else {
            check_processors();
        }
        usleep(150000);
    }
    kill_processors();
    pcntl_wait($status);
    ?>
    
    leek : Where did you get this? Open source project or your own code? Any documentation or explanation of what exactly is going on here?
    chaos : My own code. I'm not inclined to explain it, no.
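For comparison, the supervise-and-respawn core of the script above can be sketched in Python. This is only an illustrative sketch: WANT_PROCESSORS, WORKER_CMD, and every name below are stand-ins invented here, not part of the original script.

```python
import os

WANT_PROCESSORS = 2                    # stand-in for the script's constant
WORKER_CMD = ["/bin/sleep", "0.2"]     # stand-in for PROCESSOR_EXECUTABLE

processors = []

def spawn_processor():
    """Fork, then exec the worker in the child (like pcntl_fork + pcntl_exec)."""
    pid = os.fork()
    if pid:
        processors.append(pid)         # parent: remember the child PID
    else:
        os.setsid()                    # child: detach from the controlling terminal
        os.execv(WORKER_CMD[0], WORKER_CMD)

def check_processors():
    """Reap exited children, then respawn until WANT_PROCESSORS are alive."""
    alive = []
    for pid in processors:
        reaped, _status = os.waitpid(pid, os.WNOHANG)
        if reaped == 0:                # 0 means the child is still running
            alive.append(pid)
    processors[:] = alive
    while len(processors) < WANT_PROCESSORS:
        spawn_processor()
```

A real supervisor would call check_processors() in a loop with a short sleep, plus SIGTERM/SIGHUP handlers, exactly as the PHP version does.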
  • Do you actually need it to be continuously running?

    If you only want to spawn new process on request, you can register it as a service in xinetd.

    Sebastian : The spawning aspect isn't a big issue imho because the number of workers depends on the system performance, which is usually constant. More important would be the monitoring aspect of the individual worker status (crashed, whatever). One tool I just discovered for this might be DJB's daemontools
    vartec : That's one option. For monitoring you could also use flock()-ed PID files. Upon crash all locks are released.
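vartec's flock() idea can be sketched as follows (a hedged illustration; worker_is_alive and the PID-file path are invented names): each worker holds an exclusive lock on its PID file for its whole lifetime, and since the kernel releases flock() locks automatically when a process dies, crash included, a supervisor can test liveness by trying to grab the lock.

```python
import fcntl

def worker_is_alive(pidfile_path):
    """If we cannot grab the lock, some process still holds it, i.e. the worker lives."""
    f = open(pidfile_path, "a")
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        f.close()
        return True        # lock held elsewhere -> worker is running
    fcntl.flock(f, fcntl.LOCK_UN)
    f.close()
    return False           # we got the lock -> nobody holds it -> worker is gone
```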
  • It sounds like you already have a MQ up and running on a *nix system and just want a way to manage workers.

    A very simple way to do so is to use GNU screen. To start 10 workers you can use:

    #!/bin/sh
    for x in `seq 1 10` ; do
        screen -dmS worker_$x php /path/to/script.php worker$x
    done
    

    This will start 10 workers in the background using screens named worker_1, worker_2, and so on.

    You can reattach to a worker's screen by running screen -r worker_1 (and similarly for the others) and list the running workers by using screen -list.

    For more info this guide may be of help: http://www.kuro5hin.org/story/2004/3/9/16838/14935

    Also try:

    • screen --help
    • man screen
    • or google.

    For production servers I would normally recommend using the normal system startup scripts, but I have been running screen commands from the startup scripts for years with no problems.

  • A pcntl plugin-type server daemon for PHP:

    http://dev.pedemont.com/sonic/

How to view code for classes in the .NET library?

I would like to look at the code for some of the classes in the .NET library. I find functions by using intellisense and just reading the tooltips that come up when I select different items.

One example is the Contains method that you can use on arrays to search for a given string. I just happened to stumble upon that while working on an exercise to learn. I'm assuming it's a simple method that just iterates through the array and checks for the string at each element.

If I wanted to take a look at this code, or code for any other class in .NET, how would I go about it? I have Visual Studio 2008.

From stackoverflow
  • Use .NET Reflector

    Lucero : The one and only answer... basically ;)
    Mehrdad Afshari : @Lucero: Not the only...
    Lucero : Well, since the question asks for "code for any other class in .NET", the framework symbols or source will not help much. Also, looking at Rotor sources can be misleading since the actual BCL implementation may differ.
    Stevo3000 : Can't beat this answer!
  • Reflector is the way to go.

    .NET Framework 3.5 BCL source code is available through debug symbols (reference license).

    SSCLI (Rotor) and Mono source code can be relevant too.

  • Either Reflector (for ease of use and universal application) or Microsoft's symbol server for the real thing.

Adding inputs with AJAX?

Is there a way in AJAX or JS to add further inputs upon a button click?

From stackoverflow
  • Further inputs? Run any JavaScript you want when a user clicks a button by adding an event listener to the button that listens for a click.

  • In short, yes you can add more inputs on a button click.

    For example, in jQuery, you could have something like this, where buttonID is the id attribute of the button and formID is the id attribute of your form:

    $("#buttonID").click(function() {
        //add new inputs here, something like:
        $("#formID").append('<input type="text" id="newInput" name="newInput" />');
    });
    

    You can also have the additional inputs hidden to start off with and then 'un-hide' them on a click if you want.

  • Once a user clicks on the button, if you have an event listener, you can change what they had entered; you can do anything you want.

    I am not certain what you mean by 'further inputs' though. If you are sending data then you can append whatever you want, I frequently append a timestamp to help prevent caching issues, for example.

Implementing an Interface on a dynamic type with events

I am taking in an interface, looping through the .GetEvents() return array and attempting to implement the event on my dynamic type. At the point when I try to call TypeBuilder.CreateType(), I am greeted with this lovely error:

"Application method on type from assembly is overriding a method that has been overridden."

If I comment out the typeBuilder.DefineMethodOverride calls that attempt to implement the interface methods, then at the point when I attempt to subscribe to the event I get the error:

"The method or operation is not implemented."

Here is the method I have that is attempting to add the detected event to the emitted type. Just a quick note, I have other code defining the type and adding methods implementing those on the interface and all that code works fine. I had no problems until I attempted to add events into the mix.

protected static void AddEvent(EventInfo interfaceEvent, TypeBuilder proxyBuilder)
    {
        // Event methods attributes
        MethodAttributes eventMethodAttr = MethodAttributes.Public | MethodAttributes.HideBySig | MethodAttributes.Virtual | MethodAttributes.NewSlot | MethodAttributes.Final | MethodAttributes.SpecialName;
        MethodImplAttributes eventMethodImpAtr = MethodImplAttributes.Managed | MethodImplAttributes.Synchronized;

        string qualifiedEventName = string.Format("{0}.{1}", typeof(T).Name, interfaceEvent.Name);
        string addMethodName = string.Format("add_{0}", interfaceEvent.Name);
        string remMethodName = string.Format("remove_{0}", interfaceEvent.Name);

        FieldBuilder eFieldBuilder = proxyBuilder.DefineField(qualifiedEventName,
            interfaceEvent.EventHandlerType, FieldAttributes.Public);

        EventBuilder eBuilder = proxyBuilder.DefineEvent(qualifiedEventName, EventAttributes.None, interfaceEvent.EventHandlerType);

        // ADD method
        MethodBuilder addMethodBuilder = proxyBuilder.DefineMethod(addMethodName,
            eventMethodAttr, null, new Type[] { interfaceEvent.EventHandlerType });

        addMethodBuilder.SetImplementationFlags(eventMethodImpAtr);

        // We need the 'Combine' method from the Delegate type
        MethodInfo combineInfo = typeof(Delegate).GetMethod("Combine", new Type[] { typeof(Delegate), typeof(Delegate) });

        // Code generation
        ILGenerator ilgen = addMethodBuilder.GetILGenerator();
        ilgen.Emit(OpCodes.Ldarg_0);
        ilgen.Emit(OpCodes.Ldarg_0);
        ilgen.Emit(OpCodes.Ldfld, eFieldBuilder);
        ilgen.Emit(OpCodes.Ldarg_1);                    
        ilgen.Emit(OpCodes.Call, combineInfo);            
        ilgen.Emit(OpCodes.Castclass, interfaceEvent.EventHandlerType);    
        ilgen.Emit(OpCodes.Stfld, eFieldBuilder);  
        ilgen.Emit(OpCodes.Ret);

        // REMOVE method
        MethodBuilder removeMethodBuilder = proxyBuilder.DefineMethod(remMethodName,
            eventMethodAttr, null, new Type[] { interfaceEvent.EventHandlerType });
        removeMethodBuilder.SetImplementationFlags(eventMethodImpAtr);

        MethodInfo removeInfo = typeof(Delegate).GetMethod("Remove", new Type[] { typeof(Delegate), typeof(Delegate) });

        // Code generation
        ilgen = removeMethodBuilder.GetILGenerator();
        ilgen.Emit(OpCodes.Ldarg_0);
        ilgen.Emit(OpCodes.Ldarg_0);
        ilgen.Emit(OpCodes.Ldfld, eFieldBuilder);    
        ilgen.Emit(OpCodes.Ldarg_1);                 
        ilgen.Emit(OpCodes.Call, removeInfo);            
        ilgen.Emit(OpCodes.Castclass, interfaceEvent.EventHandlerType);   
        ilgen.Emit(OpCodes.Stfld, eFieldBuilder);  
        ilgen.Emit(OpCodes.Ret);

        // Finally, setting the AddOn and RemoveOn methods for our event
        eBuilder.SetAddOnMethod(addMethodBuilder);
        eBuilder.SetRemoveOnMethod(removeMethodBuilder);

        // Implement the method from the interface
        proxyBuilder.DefineMethodOverride(addMethodBuilder, typeof(T).GetMethod("add_" + interfaceEvent.Name));

        // Implement the method from the interface
        proxyBuilder.DefineMethodOverride(removeMethodBuilder, typeof(T).GetMethod("remove_" + interfaceEvent.Name));

    }

Google has been 0 help on this (searching for "is overriding a method that has been overridden" just returns a lot of Crystal Reports topics), and I have been struggling with this all morning. Any help would be greatly appreciated!

From stackoverflow
  • Never mind. When I loop through the methods of the interface to implement them, I am accidentally implementing the add/remove accessors as well.

    Fixed it with this in my loop: if(method.IsSpecialName) continue;

How to find the local instances using SQL Server SMO?

Hi, is there any way to find out the local instances installed on a single physical machine?

Scenario:

Suppose I have installed SQL Server 2005 and SQL Express on the same machine. Is there any way to find these instances using SMO?

And I have installed the data files of the two instances in different locations.

How do I find the locations of these data files, both at the same time?

output should be like:

Sqlinstances:

 sqlravikiran
 sqlravikiran\express

files:

D:\datafiles\master.mdf(sqlserver) f:\datafiles\master.mdf(sqlexpress)

How do I do this?

From stackoverflow
  • Here is a nice article in MSDNMAG and here is the intro page on MSDN.

    But please try to update and improve your question instead of posting a new and only slightly different one.

    Cute : Hi thanks for info...keep rocking.it helps me a lot..

Storing jquery code in external file and directory structure?

Hi there,

Can anyone confirm the best approach for storing jQuery code? Initially I was inserting everything in the HEAD of a document, but I am seeing more and more code being used across pages.

Is the best way to use include files with the extension .js?

Is there anything else, or better, with respect to this?

What would be the best place to store my .js file if this is the case..

Off the root I have

/css /scripts

etc. /scripts is where my jQuery files are... but should I be using the same directory for my .js files?

Anyone have some recommendation on directory structure?

From stackoverflow
  • You should store as much code as you can in .js files, because that allows the browser to download the code just once and re-use it in all requests.

    If you include your code directly in the HEAD section as plain text, you're forcing that code to be downloaded with each request, slowing down the page transfer.

    The directory where you store the files is up to you really... I use /css and /js, but /scripts is used as well on many occasions.

  • Where you insert your JavaScript on the page depends on what you're trying to do. There are arguments for linking your JavaScript at the end of the page for speed's sake, so that the browser can load the page before attempting to parse/execute any JavaScript. Of course, if you need the JavaScript to execute before your page loads, you'd need to put it in the head of your document. It's really up to you and your needs.

    .js is the standard extension for JavaScript files, so it'd be preferable to keep using that. It's a good idea to link to your JavaScript files instead of putting them right on the page, so the user's browser can cache them.

    Where you store the scripts is up to you, but it seems like a fine idea to store all your jQuery files in a /scripts directory.

  • It doesn't need to be too complicated. Yes, store with the .js extension for sure, but keeping your JavaScript separate from 3rd-party libraries is also recommended.

    This is about all you would need:

    .
    ./js/jquery/jquery.js
    ./js/yourfiles.js
    

    Or, put your files under your company name:

    ./js/companyname/yourfiles.js
    

    And as others have said: Put your JS at the end of the file if at all possible.

  • Thank you everyone for the confirmations; I feel a bit more prepared now.

  • Seb is correct, but to add one thing: try to combine all of your JS files into one file, and likewise for your CSS.

    Regardless of whether the files are cached, there is still a round-trip request to the server for each individual file to check whether it has been updated. This can slow down the user's experience when there are many JS and CSS files.
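    That bundling step, concatenating everything into one JS file and one CSS file, can be sketched as a tiny build script (a hedged illustration; bundle and the file names are invented):

```python
def bundle(paths, out_path):
    """Concatenate several source files into one, newline-separated."""
    with open(out_path, "w") as out:
        for path in paths:
            with open(path) as src:
                out.write(src.read())
                out.write("\n")
```

    Served as a single all.js (and a single all.css), the browser then makes one conditional request per type instead of one per file.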

How do I search the collection of a collection in my LINQ Where clause?

I've got the following ADO.NET Entity Framework Entity Data Model:

ADO.NET Entity Data Model

I want to find all the Policyholders with both a Service of a given Id and also a Keyword of a given Status.

This LINQ Does Not Work:

Dim ServicesId As Integer = ...
Dim KeywordStatus As Integer = ...

Dim FoundPolicyholders = From p As Policyholder In db.PolicyholderSet.Include("Keywords").Include("Services") _
                         Where p.Services.Id = ServicesId _
                         And p.Keywords.Status = KeywordStatus _
                         Select p

The Where clause cannot search the p.Services and p.Keywords EntityCollections in that way.

'Id' is not a member of 'System.Data.Objects.DataClasses.EntityCollection(Of ....Service)'.

What is the correct LINQ syntax to do what I want?

From stackoverflow
  • db.PolicyholderSet.Where(ph =>
       ph.Services.Any(s => s.Id == someId) &&
       ph.Keywords.Any(kw => kw.Status == someStatus))
    

    Why does your query not work? Because p.Services and p.Keywords are collections of Service and Keyword; the collections themselves have no property Id or Status, hence you cannot use p.Services.Id or p.Keywords.Status.

    Visual Basic…

    Dim FoundPolicyholders = From p As Policyholder In db.PolicyholderSet.Include("Keywords").Include("Services") _
                             Where p.Services.Any(Function(s) s.Id = ServicesId) _
                             And p.Keywords.Any(Function(k) k.Status = KeywordStatus) _
                             Select p
    
    Zack Peterson : The function "Any" was the clue I needed. Thank you.
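The shape of the accepted query, an existential test (Any) over each child collection, can be mimicked in plain Python for anyone who wants to see the semantics outside of LINQ (all data and names below are invented for illustration):

```python
def find_policyholders(policyholders, service_id, keyword_status):
    """Keep only policyholders with BOTH a matching service AND a matching keyword."""
    return [
        p for p in policyholders
        if any(s["id"] == service_id for s in p["services"])
        and any(k["status"] == keyword_status for k in p["keywords"])
    ]
```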

LINQ to SQL - No Add method available

I have created a LINQ to SQL data context with a single data table in it. I am trying to simply insert a new record into that table. The problem I am coming across is that LINQ is not offering an Add method to pass the new record to. I have seen countless examples where there is an Add method, but I can't seem to get it. Am I completely missing something, or is it something else?

using (praetoriaTestDataContext db = new praetoriaTestDataContext())
{
    PageHit hit = new PageHit();
    hit.DateViewed = DateTime.Now;
    hit.Page = "test";

    db.PageHits.Add(hit); //Add method is not available!
    db.SubmitChanges();
}

Thanks!

From stackoverflow
  • With LINQ-to-SQL, you want PageHits.InsertOnSubmit

  • The Table class's Add and Remove methods have been renamed to InsertOnSubmit and DeleteOnSubmit.

    db.PageHits.InsertOnSubmit(hit);
    
    Dan Appleyard : Thanks Steve. I am going to pick you b/c you added the Remove/DeleteOnSubmit info too.

Constraint for one-to-many relationship

We have a two tables with a one-to-many relationship. We would like to enforce a constraint that at least one child record exist for a given parent record.

Is this possible?

If not, would you change the schema a bit more complex to support such a constraint? If so how would you do it?

Edit: I'm using SQL Server 2005

From stackoverflow
  • Such a constraint isn't possible from a schema perspective, because you run into a "chicken or the egg" type of scenario. Under this sort of scenario, when I insert into the parent table I have to have a row in the child table, but I can't have a row in the child table until there's a row in the parent table.

    This is something better enforced client-side.

  • It's possible if your back-end supports deferrable constraints, as does PostgreSQL.

    Tony Andrews : And as does Oracle
  • How about a simple non-nullable column?

    Create Table ParentTable
    (
        ParentID int not null,
        ChildID int not null,
        Primary Key (ParentID),
        Foreign Key (ChildID) references ChildTable (ChildID)
    );
    

    If your business logic allows it, and you have default values you can query from the database for each new parent record, you can then use a before-insert trigger on the parent table to populate the non-nullable child column.

    CREATE or REPLACE TRIGGER trigger_name
    BEFORE INSERT
        ON ParentTable
        FOR EACH ROW 
    BEGIN
    
        -- ( insert new row into ChildTable )
        -- update childID column in ParentTable 
    
    END;
    
    Matt Kane : This doesn't allow for a one-to-many relationship.
  • Here's an idea, in pseudo-SQL:

    CREATE TABLE Parent (
        id integer primary key,
        child_relation_id integer not null references child_relation
    );
    
    CREATE TABLE child_relation (
        id integer primary key
    );
    
    CREATE TABLE child (
        id integer primary key,
        child_relation_id integer not null references child_relation
    );
    

What's the difference between a dll's FileVerison and ProductVersion?

What's the difference between a dll's FileVersion and ProductVersion?

Specifically at runtime, is one used for strong binding, and the other informational?

I'd like to have one set manually, and the other incremented automatically (via our CI build process)

Edit: Richard answered the part I missed in the original question. It's Assembly version that I want to manually control (incrementing with interface changes) while it's File Version that I want my CI system to automatically increment with every build. Thanks.

From stackoverflow
  • Neither is used for strong binding (the version aspect of the full/strong name comes from the AssemblyVersion attribute).

    Both file version (from AssemblyFileVersion attribute) and product version (from AssemblyInformationalVersion attribute) contribute to the version resource (as seen in explorer's file properties).

    Other than for display/diagnostic purposes, the only real use is by installers to validate a file should be replaced.

    Addendum: why would these be different? Answer: Because of versioning requirements. Keeping Assembly Version the same means an updated version (with higher file version) will bind without change. This has a big impact on serialisation (e.g. persisted Workflows).

    File and product versions are only likely to be different if the assembly in question is not just part of one product (e.g. a reusable third-party library); if it is used in just a single application, there seems little reason not to keep them the same.

    Peter Lillevold : So in practice, why would we want these three to be different? Imo AssemblyVersion should rule, no?
    Richard : @Peter: good question... expanded with some brief comments, but that really needs a question of its own.
  • Files are distributed as part of a larger project. A file with individual build version x might be distributed as part of project version y.

BASE24 tutorial or learning material

Gentlemen,

Does someone know where I can find a tutorial or material relating to Base24? I'm referring to Base24 the product by ACI. My understanding is that they use a programming language called 'TAL' which has similarities to Cobol and C.

I've searched the net and I have only found job opportunities for Base24 developers but hardly any learning resources.

Cheers

From stackoverflow

Should a web browser delete all `session' (expiry = 0) cookies on exit?

Everything I read about cookies says that setting the expiry time of a cookie to zero should make it a 'session' cookie, which the browser will then delete upon exit.

http://www.cookiecentral.com/faq/ says that :

"...generally a session is the length of time that the browser is open for..."

http://uk2.php.net/manual/en/function.setcookie.php says :

"If set to 0, or omitted, the cookie will expire at the end of the session (when the browser closes)."
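The distinction shows up in the raw Set-Cookie header: a persistent cookie carries an Expires or Max-Age attribute, while a session cookie simply omits both. A minimal sketch of what a server might emit (the helper function is hypothetical, not part of any library):

```python
def set_cookie_header(name, value, max_age=None, secure=False):
    """Build a Set-Cookie header value; no Max-Age means a session cookie."""
    parts = [f"{name}={value}"]
    if max_age is not None:            # persistent cookie: survives browser exit
        parts.append(f"Max-Age={max_age}")
    if secure:
        parts.append("Secure")
    return "; ".join(parts)
```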

However, some experimenting in Firefox (3.0.8) shows that:

  • cookies set as session and secure get deleted on exit
  • cookies set as session only do not get deleted on exit

Opera (9.64) behaves as I would expect, deleting the session cookies upon exit whether set as secure or not.

I wanted to be able to rely on this in a web app I'm working on (using a secure cookie plus an insecure cookie as a "logged-in" flag and having them expire together, either at a real time or at 0 for a session), but it seems that even if this is in the standard, browsers are not consistent enough to rely on it :/

Is this a bug in the browser, expected behaviour, and/or is the actual lifetime of session cookies not really defined in the standard?

From stackoverflow
  • You should never rely on client-side features.

    The feature you're working on is usually implemented storing the session ID client-side and the real user info server-side (its ID, whether he's logged in or not, his personal info, etc).

    Also bear in mind cookies get sent in every request, so the less you store in a cookie, the better.

    dwc : Note that "never rely" doesn't mean you can't try to take advantage of client-side features. Set cookie expire times as it should work, but make completely sure your server-side code handles old, invalid cookies.
    Seb : Totally agree with dwc. You _can_ take advantage of client-side features; just don't think they will be always available with every user and every request.
    Legooolas : That's a really rather important point that I have completely missed. Assuming that the expiry will work for this in all browsers will always give me this problem. I guess I'll have to find another way to do this... and it seemed quite elegant until now :/