Friday, May 6, 2011

How do I add a list to a ListBox?

Hello again; I need to show X, Y, and Risk in ListBoxes, but I cannot get it to work.

public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            List<TrainigSet> listtraining = new List<TrainigSet>();
            TrainigSet tr = new TrainigSet();
            double[] X = { 2, 3, 3, 4, 5, 6, 7, 9, 11, 10 };
            double[] Y = { 4, 6, 4, 10, 8, 3, 9, 7, 7, 2 };
            string[] Risk = { "Kötü", "iyi", "iyi", "kötü", "kötü", "iyi", "iyi", "kötü", "kötü", "kötü" };
            for (int i = 0; i < X.Length; i++)
            {
                tr.X = X[i];
                tr.Y = Y[i];
                tr.Risk = Risk[i];
                listtraining.Add(tr);
            }
            for (int i = 0; i < listtraining.Count; i++)
            {
                ListBox1.Items.Add(listtraining[i].X.ToString());
                ListBox2.Items.Add(listtraining[i].Y.ToString());
                ListBox3.Items.Add(listtraining[i].Risk.ToString());
            }
        }
    }
}

public class TrainigSet
{
    public double X { get; set; }
    public double Y { get; set; }
    public string Risk { get; set; }
}
From stackoverflow
  • You have to move the instantiation/creation of the TrainingSet into the for loop (you want to create a new instance for every item you add to listtraining):

    double[] X = { ... };
    double[] Y = { ... };
    string[] Risk = { ... };
    
    for (int i = 0; i < X.Length; i++)
    {
        TrainigSet tr = new TrainigSet(); // create a new TrainingSet
        ...
        listtraining.Add(tr);
    }
    

    Otherwise you will modify the same TrainingSet instance over and over again.
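    The aliasing problem this answer describes is easy to demonstrate. A minimal sketch (in Python for brevity; C# class instances follow the same reference semantics):

```python
# Demonstrates why reusing one mutable instance in a loop is a bug:
# every list slot ends up pointing at the same object.
class TrainingSet:
    def __init__(self):
        self.x = 0.0

xs = [2, 3, 4]

# Buggy: one instance created outside the loop, mutated each iteration.
shared = TrainingSet()
buggy = []
for x in xs:
    shared.x = x
    buggy.append(shared)

# Fixed: a fresh instance per iteration.
fixed = []
for x in xs:
    ts = TrainingSet()
    ts.x = x
    fixed.append(ts)

print([t.x for t in buggy])  # [4, 4, 4] -- all three slots are the same object
print([t.x for t in fixed])  # [2, 3, 4]
```

    The buggy list contains three references to one object, so all slots show the last value written.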

  • You could also use a TrainingResult class with a public X,Y, and Risk like this

    public class TrainingResult
    {
        public double X { get; set; }
        public double Y { get; set; }
        public string Risk { get; set; }
    }
    

    And create a list of those. Next you could bind to it like this:

    ListBoxX.DataSource = trainingResults; // your List<TrainingResult>
    ListBoxX.DataTextField = "X";
    ListBoxX.DataBind();
    
    ListBoxY.DataSource = trainingResults;
    ListBoxY.DataTextField = "Y";
    ListBoxY.DataBind();
    
    ListBoxRisk.DataSource = trainingResults;
    ListBoxRisk.DataTextField = "Risk";
    ListBoxRisk.DataBind();
    

    The advantage of this is that you have a clearer relation between X, Y and Risk and get more readable (to me) code. The disadvantage is of course that the bound field name is a string value.
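The record-plus-projection idea from the second answer can be sketched quickly. A Python illustration (the TrainingResult name comes from the answer; the sample values and everything else here are illustrative):

```python
# Keep X, Y and Risk together in one record, then project each column
# when filling a list-like control -- the equivalent of binding three
# ListBoxes to the same source with different field names.
from dataclasses import dataclass

@dataclass
class TrainingResult:
    x: float
    y: float
    risk: str

results = [
    TrainingResult(2, 4, "bad"),
    TrainingResult(3, 6, "good"),
]

# One projection per "ListBox":
xs = [str(r.x) for r in results]
ys = [str(r.y) for r in results]
risks = [r.risk for r in results]

print(xs, ys, risks)  # ['2', '3'] ['4', '6'] ['bad', 'good']
```

Row i of each projection always refers to the same record, which is the "clear relation" the answer mentions.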

How do I create a custom Outlook Item?

I understand that Outlook has set items, i.e. Mail, Task, Calendar, Notes, etcetera. How can you create a custom Item that Outlook will recognize like the others? I know that when you add the Business Contact Manager it creates Items like "Opportunities".

Can you override an Item, or inherit an Item and alter/add properties and methods?

examples:

olAppointmentItem           1         Represents an AppointmentItem 
olContactItem               2         Represents a ContactItem 
olDistributionListItem      7         Represents an DistListItem 
olJournalItem               4         Represents a JournalItem 
olMailItem                  0         Represents a MailItem 
olNoteItem                  5         Represents a NoteItem 
olPostItem                  6         Represents a PostItem 
olTaskItem                  3         Represents a TaskItem
From stackoverflow
  • Outlook has the ability to create custom forms. You use the forms designer built into Outlook; there is one in all versions. You can launch a design session with the Tools | Forms | Design a Form command. Alternatively, open any Outlook item in Outlook 2003 or earlier and choose Tools | Forms | Design This Form.

    When you design a form you start from one of the existing forms, such as appointment, task, etc. The closest thing to a blank form is the post form.

    Forms can have VBScript code behind them to react to user actions -- validating data, synchronizing it with databases, creating new Outlook items, etc. To add code, once you're in form design mode, click the View Code command on the toolbar or ribbon.

    You can then publish your form to the Organizational Forms library so that everyone has access to it. Forms can also be published directly to a folder. Personal forms are published either to a folder or to your Personal Forms library.

    There is quite a lot of help documentation for this kind of thing in Outlook Help, and Google will return loads of sites that show you how.

    This question seems to come up a lot... doesn't it?
  • You cannot create new "types"; but you can certainly re-use the existing types by adding your own properties.

  • "You cannot create new types; but you can certainly re-use the existing types by adding your own properties."

    That comment is not correct. You can certainly use custom forms; you just need to publish them first to a forms library and make them accessible to users. Generally they are based on the design of one of the default item types, and can also be associated with a folder as the default item type.

    Edit: (updating post as per comment request)

    A.Create and publish a custom form - http://office.microsoft.com/en-au/outlook/HA012106101033.aspx

    B. Programmatically create an instance of the custom form:

    Outlook.Application olApp = new Outlook.Application();
    // Use MAPIFolder instead of Folder for earlier versions (such as Outlook 2003)
    Outlook.Folder contacts = olApp.Session.GetDefaultFolder(Outlook.OlDefaultFolders.olFolderContacts);
    // The message class must start with "IPM." and must be derived from a base item type, in this case ContactItem.
    Outlook.ContactItem itm = (Outlook.ContactItem)contacts.Items.Add(@"IPM.Contact.CustomMessageClass");
    itm.Display(false);
    
    John Saunders : Could you show an example, please?
    tecmo : I guess it depends on what you mean by type. You can create new forms but it must be based on one of the existing built in Outlook types.

What's all the binary stuff I get from Perl's Image::Info::image_info()?

I get the following output after running the code for a picture

[1459]./image_info.pl lah.png 
$VAR1 = {
          'width' => 206,
          'file_media_type' => 'image/png',
          'file_ext' => 'png',
          'PNG_Chunks' => [
                            'IHDR',
?V?????O?H??^#?C&?fu?M?5V??m6???M?s',
                            'IEND' 9   :˺??:?E??(;t??[/????o?4?4?O??TܲD
#PJ?EHͨ??ƥ8???#u   ?t??1?I/=?!w"???)?,??????X?|?{                                              M?N??A?  V``?&?
{8.?"???I)?W?_??|k?.c??l??s?8?~^Z??????_;?,,+,/?4~]ů?ZìU?+???i?s`C}??/?_??>?d~?lrn?n^???2???z?-???B??n?D;??aXHoeh?3???
DA5?N?Aw??? ???J?-????P?> 'RGB',
C?~&?1?cd 'heiga~H.`ha162,         H2?I???P?p?HsZ?&?P? Y`??;?q4Kov??3?Z???L???? ?F??&???aq?H???????"Ri?F? ??ٵ???L  B??r??H%??@??٩qiLJ?pres??on' => 'Deflate',
        ??/?Z?w,?k???g?=> '2835 dpm',
          'Compression' ? ((~??_^A ?c?vV??w????m,7????Eb???0J5?? ??? ????9????:?,24m[1460]

I do not understand the last bit of the file.

How can I change the encoding to make this readable?

From stackoverflow
  • You're displaying a binary chunk as text. That's not gonna be readable in any encoding.
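    A quick way to convince yourself of this: arbitrary bytes are usually not valid text in a strict encoding such as UTF-8, so no encoding choice will render them readably. A small Python sketch (the sample bytes are illustrative):

```python
# Arbitrary binary data (here, the start of a PNG signature plus junk)
# is simply not decodable as text; the readable way to look at it is a
# hex dump, not an encoding switch.
data = bytes([0x89, 0x50, 0x4E, 0x47, 0xFF, 0xFE, 0x00, 0x1B])

try:
    text = data.decode("utf-8")
except UnicodeDecodeError:
    text = None

print("valid UTF-8" if text is not None else "not valid UTF-8")
print(data.hex(" "))  # hex dump: 89 50 4e 47 ff fe 00 1b
```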

  • Redirect the output of your script to a file:

    ./image_info.pl lah.png >log.txt 2>&1
    

    Then open log.txt in your favorite GUI text editor (e.g. kate, gedit, notepad++), specifying ISO-8859-1 or UTF-8 in the open dialog. Try both encodings.

    Masi : Your command does not work.
    pts : It does work for me on Linux. Could you please copy-paste the error message you get?
  • The Perl Image::Info module shouldn't be displaying all of that encoded data at all. My own tests of that module have never done that, e.g.:

    $VAR1 = {
              'width' => 58,
              'file_media_type' => 'image/png',
              'file_ext' => 'png',
              'PNG_Chunks' => [
                                'IHDR',
                                'IDAT',
                                'IEND'
                              ],
              'PNG_Filter' => 'Adaptive',
              'color_type' => 'Gray',
              'height' => 56,
              'SampleFormat' => 'U8',
              'Compression' => 'Deflate',
              'resolution' => '1/1'
            };
    

    Try it on another PNG file, this one looks like it might be corrupted.

    Masi : @Alnitak: I run the script on Google's logo. I get a long list of different colors. Is the list a complete list of the colors in Google's logo?
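Since this answer suspects a corrupted file, here is a sketch (plain Python, not Image::Info; the helper names are my own) of walking PNG chunks and verifying each CRC, which is one way to confirm corruption:

```python
# Minimal PNG chunk walker: checks the signature and each chunk's CRC,
# raising ValueError on the kind of corruption suspected above.
import struct
import zlib

def png_chunks(data):
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    pos, names = 8, []
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        payload = data[pos + 8:pos + 8 + length]
        crc, = struct.unpack(">I", data[pos + 8 + length:pos + 12 + length])
        if zlib.crc32(ctype + payload) & 0xFFFFFFFF != crc:
            raise ValueError("bad CRC in %s chunk" % ctype.decode())
        names.append(ctype.decode())
        pos += 12 + length
        if ctype == b"IEND":
            break
    return names

def chunk(ctype, payload):
    return (struct.pack(">I", len(payload)) + ctype + payload
            + struct.pack(">I", zlib.crc32(ctype + payload) & 0xFFFFFFFF))

# A 1x1 grayscale PNG built by hand for the demo
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
idat = zlib.compress(b"\x00\x00")  # filter byte + one pixel
png = (b"\x89PNG\r\n\x1a\n" + chunk(b"IHDR", ihdr)
       + chunk(b"IDAT", idat) + chunk(b"IEND", b""))

print(png_chunks(png))  # ['IHDR', 'IDAT', 'IEND']
```

Flipping any payload byte makes the walker raise, which is how a damaged file would announce itself.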

PostgreSQL trouble on windows PHP

I'm using WAMP on Windows, which installs PHP, Apache and MySQL.

I'm now working on something new that requires PostgreSQL. The current install won't do it for me, as I keep getting these errors:

Call to undefined function pg_query()

Always

undefined function

I've installed PostgreSQL 8.3.7-1 for Windows and added php_pgsql.dll, php_pdo_pgsql.dll and even libpq.dll, which a note on the PHP page for PostgreSQL says Windows users need starting from PHP 5.2.6.

Still, I keep getting these errors...

Can someone advise the best course of action? Or should I just uninstall Apache and everything else, and do a fresh install of each component separately?

From stackoverflow
  • Did you enable it in the php ini file?

    What does a call to phpinfo() say is installed for extensions?

    WebDevHobo : The phpinfo page shows nothing about PostgreSQL. Yet when I check inside my php.ini file, it's enabled...
  • XAMPP doesn't "tell" Apache/PHP which php.ini to use, so PHP uses its default lookup strategy to find the .ini file. If you haven't changed anything, this will be the one in the directory where the Apache binary is located, xampp/apache/bin/php.ini. Did you edit this file and remove the semicolon before extension=php_pgsql.dll? When in doubt, ask

    echo 'php.ini: ', get_cfg_var('cfg_file_path');

    to see which file you have to edit. XAMPP installs PHP as a module by default, so you have to restart Apache in order to get PHP to read the php.ini again. After that,

    echo extension_loaded('pgsql') ? 'yes' : 'no';

    should print yes. If it doesn't, stop the Apache service, open a command shell, go to your XAMPP directory and enter

    apache_start.bat

    This will start Apache as a console application and you can see startup errors in this console (instead of Windows' event manager). If a DLL is missing you will get a message box.

    WebDevHobo : thanks, but I'm not using XAMPP, I'm using WAMP.
    WebDevHobo : As for the semicolon, yes, I removed it.
  • Depending on what kind of errors you see in the Apache's error.log the answers on this question might be helpful.

What do .h and .m stand for?

Exact duplicate:

Why do Objective C files use the .m extension?

I'm thinking .h stands for header. I suppose .m could stand for main, but I don't know. Do any of you actually know this?

Just to clarify, I know what goes in which file, i.e. I know the purpose of each filetype, I'm just curious if the filetype symbol has a meaning.

From stackoverflow
  • Right -- .h stands for the header file. .m is for your code, like .cpp in C++.

    regards, buk

  • Look at this other question:

    http://stackoverflow.com/questions/652186/why-do-objective-c-files-use-the-m-extension

    tmadsen : Thanks :) should have done a more extensive search on this site.
    Sergio Acosta : StackOverflow is amazing. Honestly, I was about to answer your question and was googling for sources to cite. But in the top search results I found the link to the already answered question. =)
  • h - header, m - method

    Kris : hmm, i thought it was "H"eader and i"M"plementation
  • .h stands for header while .m stands for implementation

MySQL MyISAM table performance... painfully, painfully slow

I've got a table structure that can be summarized as follows:

pagegroup
* pagegroupid
* name

has 3600 rows

page
* pageid
* pagegroupid
* data

references pagegroup; has 10000 rows; can have anything between 1-700 rows per pagegroup; the data column is of type mediumtext and the column contains 100k - 200kbytes data per row

userdata
* userdataid
* pageid
* column1
* column2
* column9

references page; has about 300,000 rows; can have about 1-50 rows per page

The above structure is pretty straightforward. The problem is that a join from userdata to pagegroup is terribly, terribly slow even though I have indexed all columns that should be indexed. The time needed to run a query for such a join (userdata inner join page inner join pagegroup) exceeds 3 minutes. This is terribly slow considering the fact that I am not selecting the data column at all. Example of a query that takes too long:

SELECT userdata.column1, pagegroup.name
FROM userdata
INNER JOIN page USING( pageid )
INNER JOIN pagegroup USING( pagegroupid )

Please help by explaining why does it take so long and what can i do to make it faster.

Edit #1

EXPLAIN returns the following gibberish:

id  select_type  table      type    possible_keys        key      key_len  ref                         rows    Extra
1   SIMPLE       userdata   ALL     pageid                                                             372420
1   SIMPLE       page       eq_ref  PRIMARY,pagegroupid  PRIMARY  4        topsecret.userdata.pageid   1
1   SIMPLE       pagegroup  eq_ref  PRIMARY              PRIMARY  4        topsecret.page.pagegroupid  1

Edit #2

SELECT
u.field2, p.pageid
FROM
userdata u
INNER JOIN page p ON u.pageid = p.pageid;
/*
0.07 sec execution, 6.05 sec fetch
*/

id  select_type  table  type    possible_keys  key      key_len  ref                rows     Extra
1   SIMPLE       u      ALL     pageid                                              372420
1   SIMPLE       p      eq_ref  PRIMARY        PRIMARY  4        topsecret.u.pageid 1        Using index

SELECT
p.pageid, g.pagegroupid
FROM
page p
INNER JOIN pagegroup g ON p.pagegroupid = g.pagegroupid;
/*
9.37 sec execution, 60.0 sec fetch
*/

id  select_type  table  type   possible_keys  key          key_len  ref                      rows  Extra
1   SIMPLE       g      index  PRIMARY        PRIMARY      4                                 3646  Using index
1   SIMPLE       p      ref    pagegroupid    pagegroupid  5        topsecret.g.pagegroupid  3     Using where

Moral of the story

Keep medium/long text columns in a separate table if you run into performance problems such as this one.

From stackoverflow
  • One possible issue is that MySQL uses only one index per query and maybe you don't have a single index with those columns -- or MySQL's query optimizer isn't picking it. What does EXPLAIN SELECT &c tell you here?

  • The easy way to figure out what MySQL is doing with your query is to have it explain the query to you. Run this and have a look at the output:

    EXPLAIN SELECT userdata.column1, pagegroup.name
    FROM userdata
    INNER JOIN page USING( pageid )
    INNER JOIN pagegroup USING( pagegroupid )
    

    MySQL will tell you in which order it processes the queries and what indexes it uses. The fact that you created indexes does not mean that MySQL actually uses them.

    See also Optimizing queries with EXPLAIN

    EDIT

    The output of your EXPLAIN looks fine. It does a full table scan on the userdata table, but that is normal since you want to return all rows in it. The best way to optimize this is to rethink your application. Do you really need to return all 372K rows?

    Salman A : I've revised my question and added the result from explain command. Appears to be using correct indices but still 128 seconds just to execute.
    Sander Marechal : I have updated my response as well.
  • I would start with breaking the query up, to figure out if there is one slow and one fast part, or if both are slow (sorry, I'm no fan of the USING syntax, so I'm going to use ON):

    SELECT 
      u.userdata, p.pageid
    FROM
      userdata u
      INNER JOIN page p ON u.pageid = p.pageid
    
    SELECT 
      p.pageid, g.pagegroupid
    FROM
      page p
      INNER JOIN pagegroup g ON p.pagegroupid = g.pagegroupid
    

    What does that give you? Running these with EXPLAIN EXTENDED will provide additional hints.

    Salman A : I've posted the output of the two queries. Explain extended returns similar queries in different syntax.
    Tomalak : Looks like the second query is the trouble maker. Please include the info what indexes you have in place.
    Salman A : primary keys + all keys used in joins are indexed. For the three tables I have indexes on pagegroup.pagegroupid (PK), page.pageid (PK), page.pagegroupid (INDEX), userdata.userdataid (PK), userdata.pageid (INDEX), userdata.column1 (INDEX)
  • Looks like you're doing a join on all rows of userdata and then trying to select everything -- that is, every page in a pagegroup with userdata. Where's the WHERE clause? There's no LIMIT; how many results did you want? Getting the row count down on the userdata row in your EXPLAIN result should speed up the query. Heh.

    Salman A : I need a dump of selected columns from userdata along with pagegroup.name for cross reference. I believe it should work fast enough if there was no "mediumtext" column in the page table.
    apphacker : Maybe you want to start recording this information in a log as it comes in instead of using SQL, maybe think about something other than SQL for this data, like non-normalized berkeley db or something.
  • What's the data type and purpose of columnX in the userdata table? It should be noted that any text data type (i.e excluding char, varchar) forces any temporary tables to be created on disk. Now since you're doing a straight join without conditions, grouping or ordering, it probably won't need any temporary tables, except for aggregating the final result.

    I think it would also be very helpful if you show us how your indexes are created. One thing to remember is that while InnoDB concatenates the primary key of the table to each index, MyISAM does not. This means that if you index column name and search for it with LIKE, but still want to get the id of the page group; Then the query would still need to visit the table to get the id instead of being able to retrieve it from the index.

    What this means, in your case, if I understand your comment to apphacker correctly, is that to get the name of each user's page groups, the query optimizer would want to use the index for the join, but for each result it would also need to visit the table to retrieve the page group name. If your datatype on name is not bigger than a moderate varchar, i.e. no text, you could also create an index on (id, name) which would enable the query to fetch the name directly from the index.

    As a final try, you point out that the whole query would probably be faster if the mediumtext was not in the page table.

    1. This column is excluded from the query you are running I presume?
    2. You could also try to separate the page data from the page "configuration", i.e. which group it belongs to. You'd then probably have something like:
      • Pages
        • pageId
        • pageGroupId
      • PageData
        • pageId
        • data

    This would hopefully enable you to join quicker since no column in Pages take up much space. Then, when you needed to display a certain page, you join with the PageData table on the pageId-column to fetch the data needed to display a particular page.

    Salman A : Answer: #1 - Yes, i am not SELECTing the data column; #2 - Yes thats a workaround which "should" work, another possibility is to de-normalize the table a tad bit and add pagegroupid into userdata but the question is if there is something wrong in the table structure or the query.
    Salman A : most of the columns are varchar 100s including pagesource.name and userdata.field2
    Salman A : It worked after I moved the "data" column into a separate table "pagetemp" that relates 1-to-1 with the page table. None of the indexes work otherwise.
  • I'm assuming the userdata table is very large and does not fit in memory. MySQL would have to read the entire table from harddisk, even if it needs only two small columns.

    You can try to eliminate the need for scanning the entire table by defining an index that contains everything the query needs. That way, the index is not a way to facilitate a search into the main table, but it's a shorthand version of the table itself. MySQL only has to read the shorthand table from disk.

    The index could look like this:

    column1, pageid
    

    This has to be non-clustered, or it would be part of the big table, defeating its purpose. See this page for an idea on how MySQL decides which index to cluster. The easiest way seems to make sure you have a primary key on pageid, which will be clustered, so the secondary column1+pageid index will be non-clustered.

    Salman A : I tried creating a two column index (pageid-sourceid) on the page table in order to create a short-circuit between userdata and pagesource which reduced execution time but not much.
    Andomar : Your comment above says performance got better with a 1:1 table. That really seems to suggest the index on (pageid,sourceid) was clustered in some way. Oh well... problem solved I guess.
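The covering-index idea in the last two answers can be made concrete. A sketch using SQLite rather than MySQL (its EXPLAIN QUERY PLAN makes the effect easy to see; the table and column names mirror the question, everything else is illustrative):

```python
# When an index contains every column a query touches, the engine can
# answer from the index alone and never read the wide base table --
# the "shorthand table" described in the answer above.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE userdata ("
    "userdataid INTEGER PRIMARY KEY, pageid INT, column1 TEXT)"
)
# Secondary index holding both the search key and the selected column:
con.execute("CREATE INDEX idx_cover ON userdata (pageid, column1)")

plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT column1 FROM userdata WHERE pageid = 1"
).fetchall()
# The plan detail mentions the covering index,
# e.g. 'SEARCH userdata USING COVERING INDEX idx_cover (pageid=?)'
print(plan[0][-1])
```

The same principle applies in MySQL ("Using index" in EXPLAIN output), though MyISAM and InnoDB differ in what they store in secondary indexes, as the earlier answer notes.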

JavaFX as a web service client?

I need to make calls to a web service from a JavaFX client. Is there some sort of wsimport-type tool that I can use to generate JavaFX client stubs from a deployed WSDL?

From stackoverflow
  • You could make Java class stubs and call them from javaFX.

  • Thank you for your response. I'm not familiar with JavaFX; I'm just trying to help a JavaFX team integrate with my web service. Can JavaFX instantiate/call Java classes?

    Yishai : Absolutely. The other way around is a little hard (have Java call javaFX code) but still doable. For the simple case of calling Java from JavaFX, start here: http://jfx.wikia.com/wiki/FAQ#How_do_I_refer_to_a_fully-qualified_java_class.3F
  • Yes,

    Suppose you have a "MyJavaClass.java"

    import somePackage.MyJavaClass;
    ...
    
    var myObject:MyJavaClass = new MyJavaClass();
    
    myObject.setSomething("this is something");
    
    println("{myObject.getSomething()}");
    
    ...
    

    Refer to "http://java.sun.com/developer/technicalArticles/scripting/javafx/javafx_and_java/" for more information

  • JavaFX supports only RESTful web services out of the box. You can, however, use existing Java tools for generating WSDL stubs and use the generated classes from your JavaFX Script classes.

  • Working JavaFX examples for JSON, RSS, Google API, Yahoo API - http://jfxstudio.wordpress.com/2009/07/24/javafx-web-services/

edit a body of text from the command line in C

I am writing some small program in C--some log in, log out, create account, send message to another account deal--that I can build on and change and innovate and polish up to refine my skills in C.

What I am trying to figure out how to implement is part of the "compose message" option. After you are prompted for the recipient, subject, and body of the message, I want to be able to go back and edit them--without having to re-type the whole thing. So if I chose the "edit message" option it would write out what I had and let me go back and change something.

I've done something sort of like this in x86 assembly, or started to do something like this, so I understand basically how this would be implemented at the machine level but I don't know how to implement it in C. Does anyone know how to do this? Things like:

  • a good way to handle potentially lengthly input
  • how to write out text and allow the user to edit it, without going back beyond a certain point
  • how to control the position of the cursor

are baffling me in C.

Also, if this has been done before, and there exists a library of functions for things like this (even a standard library I don't know about) please note that I do want to roll my own for the purpose of learning.

Edit

Alright so I guess the method I have in my head is to read the message body one character at a time, so that I can account for carriage returns and create a multi-line message. But I'm not sure how I would backspace through it. I guess it would be really hard to do this from the command line? If not impossible, to move the cursor back and erase characters which are already out of the input buffer... Would I have to "re-draw the screen" every time? Like can I just take control of the whole console and just read and write keystrokes to certain positions? Or is this too close to the machine? I sort of did it with assembly but that used 16 bit interrupts, which I'm not allowed to use in C... This is what I wrote in assembly:

(screenshot of the assembly program's hex-dump style output)

where the program would convert a byte value to two character codes representing that byte, then jump over to the right column and write the original byte (which showed up as a character), then jump back and write out two more hex numbers, in the next slot... and so on, left to right, top to bottom... it was easy, but I have NO idea how I would implement that in C. All I can do is INT 21 style input and output, writing lines to the console which scrolls the window up and so forth.

From stackoverflow
  • I wrote a text editor.

    I've been doing C for twenty years.

    The data structure for the text was the most complex single data structure I have ever written; this being a data structure which can, as you specify, efficiently handle arbitrary length text.

    If you are new to C, you are biting off more than you can chew.

    I suggest a simple data structure, like a buffer or linked list - you can't handle arbitrary length text, but it's better than nothing.

    Carson Myers : well what I was thinking is that I could read the input one character at a time, add it to a buffer, when the buffer fills up realloc() it, etc. But what I don't know how to do is backspace through it... I'll update my question, I've thought about the process a bit
    Blank Xavier : realloc() typically will memcpy() the existing buffer into the new buffer. This is not scalable. It's fine for a single line - but if you only have a single line, why not just malloc() 1024 bytes and be done with it.
    Carson Myers : what if I doubled the size of the buffer at each realloc()? That way it would grow exponentially... besides I'm finding that the biggest issue is being able to edit text in the buffer
    Blank Xavier : With a flat buffer the straightfoward approach, when a character is added or removed, is to memcpy() the buffer to the right of the insert/delete point up or down by one. This as you can imagine sucks :-) but it's simple and is tolerable for a small buffer.
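    A middle ground between the flat buffer debated in these comments and a full editor structure is a gap buffer: keep the text split around the cursor, so inserts and backspaces at the cursor never shift the rest of the text. A sketch (in Python for brevity; the same layout works with two char arrays in C):

```python
# Gap buffer sketch: text is stored as the characters to the left of
# the cursor plus the characters to the right (kept reversed so both
# ends support O(1) amortized push/pop).
class GapBuffer:
    def __init__(self, text=""):
        self.left = list(text)   # characters before the cursor
        self.right = []          # characters after the cursor, reversed

    def insert(self, ch):
        self.left.append(ch)     # typing at the cursor

    def backspace(self):
        if self.left:
            self.left.pop()      # delete before the cursor

    def move_left(self):
        if self.left:
            self.right.append(self.left.pop())

    def text(self):
        return "".join(self.left) + "".join(reversed(self.right))

buf = GapBuffer("helo")
buf.move_left()     # cursor now between 'hel' and 'o'
buf.insert("l")     # fix the typo without shifting the whole buffer
print(buf.text())   # hello
```

Moving the cursor is cheap (pop from one side, push to the other), and only cursor motion -- not every keystroke -- touches more than one character.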
  • That is highly related to the system (OS). I think you are trying to do this on Windows.

    You can use the Windows Console API to do it.

    If you want to study some libraries for reference (before you roll your own), a good library is the GNU Readline.

    Blank Xavier : The guy specified he wants to roll his own.
    Francis : The Windows Console API is for him to roll his own. The Readline is good for studying.
    Carson Myers : alright, I'll look at that API and see how to use it, and read that library to see how I might implement it
  • You say you want to avoid using libraries (standard or otherwise), but unfortunately in C all input & output is performed via libraries - the language itself has absolutely no input/output facilities.

    So you are doomed to use libraries of some sort. Given that you seem to want a textual interface, I suggest taking a look at the portable version of curses at http://pdcurses.sourceforge.net.

    Carson Myers : alright I shall take a look at that
    Chris Lutz : Is PDCurses any more portable than ncurses?
    anon : I've only used it on Windows, and I haven't used ncurses, so I can't really comment.
  • There is no way in ANSI C to make a portable line-editor. If you roll your own, you will have to reroll it for every new operating system you want your program to work on.

    If I may make a suggestion, I would use a pre-existing library to do all of that hard, platform-specific dirty work, and with that leg-up, learn how to handle things like arbitrary-length input and such. Then, when your code works (and is good), learn how to do all that dirty work, and take away the library-crutch. That way, you're not tackling the whole thing - you're breaking it down into more manageable parts.

    Even this is a bit of an oversimplification. It took me quite a while to learn how to handle arbitrary-length input.

    Also, know that, if you want your code to be portable, removing the library dependency will mean that, if you want to port it, you'll have to either a) rewrite all that dirty-work code, or b) add the library back in.

    To end this all on a joke, this is your brain with libraries:

    Pigmaei gigantum humeris impositi plusquam ipsi gigantes vident.
    (If I have seen a little further it is by standing on the shoulders of Giants.)
    --Isaac Newton

    This is your brain without libraries:

    If I have not seen as far as others, it is because giants were standing on my shoulders.
    --Hal Abelson

  • As for holding the data, I'm guessing a rope would be the best data structure available: in simplified form, it's a tree of strings. When you want to print it on-screen, all you should need to do is walk it in pre-order and print it. Splitting a line in two involves a relatively simple tree op: add a leaf. The hard part would be splitting the string itself (copy the string, set the old end to 0, add the leaf, add the pointer)...

    Now, there's the issue of tracking the cursor... you could leave a breadcrumb trail to follow from the root to where the cursor currently is.

    There's also thinking about a resizable console... that is, if you want your editor to work whether the terminal is 80 or 200 chars wide...

    Blank Xavier : What happens when you load in a file which is a single line say 10mb in length? do you have a rope with one element which contains a single 10mb string? if so, what happens when you delete the first character from that string? a 10mb memcpy?
    Tordek : I have never actually used a rope; but I'm guessing that you can, first, determine a max length for a leaf string (usually BUFSIZ, which is a good number for anything disk-related), and read a chunk at a time.
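    A minimal sketch of the rope idea described above (in Python for brevity; the two node classes and sample strings are illustrative):

```python
# Rope sketch: a binary tree whose leaves hold short strings.
# Concatenation is O(1) (allocate one Node); indexing walks the tree
# using each node's cached subtree length.
class Leaf:
    def __init__(self, s):
        self.s = s
        self.length = len(s)

    def index(self, i):
        return self.s[i]

class Node:
    def __init__(self, left, right):
        self.left, self.right = left, right
        self.length = left.length + right.length  # cached for navigation

    def index(self, i):
        if i < self.left.length:
            return self.left.index(i)
        return self.right.index(i - self.left.length)

# "Concatenate" three strings without copying any of them:
rope = Node(Leaf("hello "), Node(Leaf("rope "), Leaf("world")))
print(rope.length)  # 16
print("".join(rope.index(i) for i in range(rope.length)))  # hello rope world
```

    A pre-order walk over the leaves reproduces the full text, which is the "print it on-screen" step above; capping leaf size (as the comment suggests, e.g. at BUFSIZ) keeps any single edit's memcpy small.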

Suggestions needed for handling dynamic redirect routes in ASP.Net MVC

I have a number of create and delete partial views that I want to reuse by calling from other views. The issue is that this requires me to pass the return route and routeValues to the create and delete controller methods so that they can generate an appropriate redirect back to the original view on success. I created some extensions and helpers to keep this tidy but it seems convoluted to approach the problem this way. Have I missed something? Is there a simple way to RedirectToAction when the (redirect) controller, action and routeValues can vary?

Example for clarity: both the Product A-Z Index View and the Product SomeCategory Index View have a delete button that calls the Delete View (which displays a "do you really wanna delete" message) which has a "Really Delete" button that posts back to the actual (POST) Delete method in the product controller. Once the product is deleted we need to return a RedirectToAction but since both the 'A-Z Index' and the 'SomeCategory Index' Views have a Delete link we have to dynamically set the action, controller and routeValues to whatever view called the delete initially.

This isn't difficult but it's extremely convoluted to pass the redirect values around all the controllers and views that handle the delete and it stands to reason there must be a saner way to do this.

From stackoverflow
  • did you consider using RedirectToRoute

    RedirectToRoute(new { controller = "MyController", action = "Create", id = "" });
    
    grenade : It wouldn't solve the problem which is maintaining the controller, action and routeValues of the calling view across the method calls.
  • Consider not using a whole view for the 'delete confirm'. Use an Html helper and a JavaScript confirm(). I.e. render the post form and delete link with a helper so that when the user clicks "delete" they get a JS confirm prompting "sure to delete?", and on OK the function returns true and invokes the submit on the form to delete. Then the delete action simply redirects to wherever it normally would. I'd hope you are using different delete actions for the different objects you are trying to delete. If your plan is to have a generic delete action, well, that's harder (and not recommended IMO).

    My delete helper includes lots of things, but the delete part looks like this (with snips):

                string deleteLink = String.Format(
                    @"<a onclick=""deleteRecord({0})"" href='#'>Delete</a>" +
                    @"<form id='deleteForm' method='post' action='" +
                    routeRelativePath + "/" + actionPrefix + "Delete/" + model.ID +
                    @"'></form>",
                    model.ID);
    

    ..and it (the helper) attaches some js too:

        function deleteRecord(recordId) {
            if (confirm('Are you sure you want to delete this {friendlyModelName}?\nNOTE: There is no Undo.')) {
                // Perform delete
                var action = "{routeRelativePath}/{actionPrefix}Delete/" + recordId;

                // jQuery non-AJAX POST version
                $("form#deleteForm").submit();
            }
        }
    

    ..you can see that the helper creates the Delete link with all the params for the route and ID etc. The js simply does the 'confirm' part then submits the tiny form you can see is created by the helper.

    [sorry if the samples are not 100% complete - i've had to remove lots of things: eg the helper and attached js have many different modes so as to support ajax POSTs etc]

    grenade : Your solution is valid for many situations but in our scenario, deletions require a view as there is a complex dependency tree below the item being deleted and this is displayed on the confirmation view. We also need dynamic redirection support on Create and Edit views that may be called by any number of other views that require a mechanism for returning to the calling view.
  • Interrogate the Request.UrlReferrer in the Delete action (the one that displays the confirmation view) and store the referrer details in the temp data.

    In the delete action, read the referrer details back out of the temp data and use the Redirect( string ) overload to redirect to the url that referred the user to the original delete request.

PHP foreach loop through multidimensional array

I have an array:

$arr_nav = array( array( "id" => "apple", 
       "url" => "apple.html",
       "name" => "My Apple" 
     ),
     array( "id" => "orange", 
       "url" => "orange/oranges.html",
       "name" => "View All Oranges",
     ),
     array( "id" => "pear", 
       "url" => "pear.html",
       "name" => "A Pear"
     )  
 );

Which I would like to loop over with a foreach, replacing my current for loop (which only lets me hard-code the number of rows):

for ($row = 0; $row < 5; $row++)

with the ability to display a .first and .last class for the relevant array values

Edit

I would like the data to be echoed as:

<li id="' . $arr_nav[$row]["id"] . '"><a href="' . $v_url_root . $arr_nav[$row]["url"] . '" title="' . $arr_nav[$row]["name"] . '">' . $arr_nav[$row]["name"] . '</a></li>' . "\r\n";

Many thanks for your quick responses. StackOverflow rocks!

From stackoverflow
  • If you mean the first and last entry of the array when talking about a.first and a.last, it goes like this:

    foreach ($arr_nav as $inner_array) {
        echo reset($inner_array); //apple, orange, pear
        echo end($inner_array); //My Apple, View All Oranges, A Pear
    }
    

    Arrays in PHP have an internal pointer which you can manipulate with reset, next and end. Retrieving keys/values works with key and current, but using each might be better in many cases.

  • $last = count($arr_nav) - 1;
    
    foreach ($arr_nav as $i => $row)
    {
        $isFirst = ($i == 0);
        $isLast = ($i == $last);
    
        echo ... $row['name'] ... $row['url'] ...;
    }
    
    kitsched : Very simple and practical solution. Thanks.
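    If it helps to see the index-based first/last pattern outside PHP, here is the same idea sketched in Python (purely illustrative, with markup simplified from the question's <li> template):

```python
arr_nav = [
    {"id": "apple", "url": "apple.html", "name": "My Apple"},
    {"id": "orange", "url": "orange/oranges.html", "name": "View All Oranges"},
    {"id": "pear", "url": "pear.html", "name": "A Pear"},
]

items = []
for i, row in enumerate(arr_nav):
    # first element gets "first", last gets "last", everything else no class
    css = "first" if i == 0 else ("last" if i == len(arr_nav) - 1 else "")
    items.append('<li id="%s" class="%s">%s</li>' % (row["id"], css, row["name"]))

print(items[0])   # <li> with class="first"
print(items[-1])  # <li> with class="last"
```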
    <?php
    // Note: reset()/end() return copies of the elements, so assigning to
    // $first/$last would not modify $arr_nav itself. Use references instead.
    $first = &$arr_nav[0];                    // Reference to the first element
    $last  = &$arr_nav[count($arr_nav) - 1];  // Reference to the last element
    // Ensure that we have a first element and that it's an array
    if (is_array($first)) {
       $first['class'] = 'first';
    }
    // Ensure we have a last element and that it differs from the first
    if (is_array($last) && count($arr_nav) > 1) {
       $last['class'] = 'last';
    }
    unset($first, $last); // Break the references
    

    Now you could just echo the class inside your html-generator. You would probably need some kind of check to ensure that the class is set, or provide a default empty class in the array.

Does the security of Skein as a hash imply the security of Threefish as a block cipher?

The Skein hash proposed for SHA-3 boasts some impressive speed results, which I suspect would be applicable for the Threefish block cipher at its heart - but, if Skein is approved for SHA-3, would this imply that Threefish is considered secure as well? That is, would any vulnerability in Threefish imply a vulnerability in SHA-3? (and thus, a lack of known issues and a general trust in SHA-3 imply the same for Threefish)

From stackoverflow
  • Disregard my previous answer. I misunderstood the relationship between Skein and Threefish. I still don't think Skein being approved absolutely proves Threefish is generally secure (it's possible Threefish is only secure when used in a particular manner), but it would be an indication.

    bdonlan : I don't think that necessarily holds. If the usage of Threefish in Skein is insecure, that would not necessarily imply anything about another usage of Threefish. Moreover, there are a number of proofs about Skein's security in terms of Threefish, but I'm not certain what this implies on a practical level about Threefish's security for encryption applications.
  • Nope. The security of Skein does not imply the security of Threefish. Putting it positively, if someone finds a weakness in Threefish then this does not imply that Skein is also insecure.

    The question, however, is quite interesting and applies to other hash functions too. Skein uses a Davies-Meyer construction with some modification. MD5, SHA1 and many other hash functions also use this Davies-Meyer construction, and hence they are in principle based on a block cipher. It's just that in the case of MD5 or SHA1 the block cipher does not have a name, and I'm not aware of much research on how suitable these constructs are.

    The requirements for a good block cipher and for a good hash function are different. Somewhat simplified: if E is a block cipher and it is not feasible to find two keys K, K' and two messages M, M' such that E_K(M) xor M = E_K'(M') xor M', then E is suitable for constructing a hash function using Davies-Meyer. But to be secure as a block cipher, E would need other properties: E would have to resist chosen-ciphertext attacks, chosen-plaintext attacks, etc.
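    To make the Davies-Meyer data flow concrete, here is a toy sketch in Python. The "cipher" below is a made-up, non-cryptographic mixing function, purely illustrative of the h = E_m(h) xor h feed-forward; it is not Threefish or any real cipher:

```python
def toy_cipher(key: int, block: int) -> int:
    """A keyed, invertible mixing step on 32-bit blocks. Illustrative only;
    NOT a real block cipher and NOT cryptographically secure."""
    x = (block + key) & 0xFFFFFFFF
    x ^= (x << 13) & 0xFFFFFFFF
    x ^= x >> 17
    return x

def davies_meyer_hash(message_blocks, iv=0x12345678):
    """Chain the cipher over the message blocks, keying it with each block
    and XOR-ing the previous state back in (the feed-forward step)."""
    h = iv
    for m in message_blocks:
        h = toy_cipher(m, h) ^ h  # feed-forward XOR makes each step one-way
    return h
```

    The feed-forward XOR is the point: without it, each step would be an invertible cipher call and the "hash" could be walked backwards.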

    Furthermore, if E is a good block cipher then that does also not mean it gives a good hash function. Microsoft had to learn this the hard way with the hash they used in the XBOX. This hash was based on the block cipher TEA that had a weakness that was insignificant for a block cipher, but proved fatal when used for a hash function.

    To be fair, there are some relations between being a good block cipher and being suitable for a hash function. E.g., in both cases differential attacks need to be avoided. Hence some design methods used for constructing good block ciphers can be used to construct good hash functions.

    Let me also add that some of the proposals for SHA-3 are based on AES. So far, I haven't seen much support for favoring AES based hash functions, just because AES is already a standard. These hash functions are analyzed just like any other SHA-3 proposal.

    bdonlan : Thanks - that makes a lot of sense now :)

Is there any Java API available to compare two microsoft word documents?

I am looking for a Java API which can compare two microsoft word documents.

We are using Linux server so we can't install Microsoft Word in it.

EDIT: We want to compare two documents and highlight whatever is not common between them with some color or in another way... so I think we have to merge both documents and highlight the content which is not common.

From stackoverflow
  • There is an Apache POI - Java API to do this.

    Example source code is here.

    I found another article doing the same thing in Java, but uses windows COM to do this. If you are using Linux, it suggests using a remote windows machine to do the work. The article contains detailed explanation: Word from Java

    Niyaz : I think you will have to use some other library for comparing the contents. So [A library for reading DOC files + a library for content comparison] will do the work for you.
  • You can have a look at Aspose.Words for Java. It might be able to help you out.

  • MS Word is not really supported in Java.

    You can use POI, but you won't be able to compare everything. COM control is your best chance of doing it (you might be able to use WINE on Linux to emulate it).

    I think your best choice is to use RTF files and iText-RTF (in MS Word you can save a document as RTF). They have better support; however, from my own experience I can tell you that sometimes they render differently in MS Word 2003, OpenOffice and MS Word 2007. So you should always check that.

    You could also try the OpenOffice API (I've never tried it), but there aren't many resources out there to tell you how to use it.

  • If its a docx, you could use docx4j (ASL v2).

    See the CompareDocuments example

  • If Office 2007 supports server mode, like OpenOffice does, you could send the stream to a network and process the results back.

    You might be able to achieve what you need it with a recent version of OpenOffice too, using the UNO API.

Fluent NHibernate mapping a joined sub class without the original class file.

I have a library with a class called Recipient which has its own fluent mapping setup within the library.

Now in another project I have created a new class called SentEmail which inherits from Recipient, I want to be able to create a new mapping class file based on the original Recipient map. If I could update the original ClassMap file I would use

JoinedSubClass("ID", m => MAPPING HERE);

However because I can't adjust the original class map I am stuck as to how I can do this.

There must be another way to skin this cat, if anyone has any ideas they would be much appreciated.

Thanks

UPDATE

Also, one thing I forgot to mention: part of the details in the new SentEmail model class are stored in a separate table from the Recipient table.

From stackoverflow
  • If you can't adjust the original mapping at all, then you're out of luck; otherwise you could use the AddPart method to add a separate instance of JoinedSubClassPart.

    An aside: your design sounds a bit peculiar. SentEmail doesn't sound like it should really inherit from Recipient. SentEmail would inherit from Email, or SuccessfulRecipient from Recipient; Recipient and Email are two separate concepts.

    John_ : That is a good point, thanks James. I have amended the name to be RecipientEmailHistory as it is more related to the recipient than the email. I've also decided that creating a map for the file was unnecessary because it was a read-only class, so I create a query and transform it into my new model using NHibernate.Transform.Transformers.AliasToBean. Thanks for your assistance.

memcpy not doing as it is supposed to

I have this bit of code that is outputting the wrong results.

#include <stdio.h>
#include <string.h>

int main() 
{
  unsigned char bytes[4];
  float flt=0;

  bytes[0]=0xde;
  bytes[1]=0xad;
  bytes[2]=0xbe;
  bytes[3]=0xef;

  memcpy( &flt, bytes, 4);

  printf("bytes 0x%x float %e\n", flt, flt);
  return 0;
}

the output that I get is

bytes 0xc0000000 float -2.000001e+00

I am expecting to get

bytes 0xdeadbeef float -6.2598534e+18

edit #1 as was pointed out the endianness could be different which would result in the following

bytes 0xefbeadde float -1.1802469e+29

What I don't understand is the cast from float to unsigned int resulting in 0xc0000000 (the float in the same printf statement being -2.0000 I would attribute to compiler optimization).

This was working before on a different computer. It could be an architecture change.

From stackoverflow
  • You do realise that floats are promoted to double when passed to a variadic function like printf()? So when you say:

    printf("bytes 0x%x float %e\n", flt, flt);
    

    you are trying to treat what is really two 8-byte values like two 4-byte values, giving (I believe) undefined behaviour.

  • It is not a problem with memcpy.

    1. float is always promoted to double when passed through the ... (varargs) of printf, so you just can't get 4 bytes on most Intel architectures.
    2. When you expect 0xdeadbeef in this code, you assume that your architecture is big endian. There are many little endian architectures, for example Intel x86.
    Tanj : did not realize that it was a printf issue for the hex portion. I don't understand how the float was not cast properly into a double.
    Andomar : Casting float to double is more than shifting four bytes to the right: the conversion makes sure 3.14 in float means 3.14 in double. And after the promotion to double, printf() is passed eight bytes, where the %x tells it to expect four. So everything after %x is off-base.
  • The "%x" in printf expects an unsigned int. You're giving it a float which is getting automatically converted and that's not what you want. You want to do something like:

    printf("bytes 0x%x float %e\n", *((unsigned int *)&flt), flt);
    

    Oh, and as someone else pointed out, if you're on x86 you're not going to see 0xdeadbeef, more like 0xefbeadde.
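    The byte-order point can be double-checked by reinterpreting the same four bytes under both byte orders, e.g. with Python's struct module (a quick sketch, just to confirm the two candidate values from the question):

```python
import struct

raw = bytes([0xde, 0xad, 0xbe, 0xef])  # the bytes written by memcpy, in memory order

big, = struct.unpack('>f', raw)     # read as big-endian: bit pattern 0xdeadbeef
little, = struct.unpack('<f', raw)  # read as little-endian: bit pattern 0xefbeadde

print(big)     # ~ -6.2598534e+18 (what the question expected)
print(little)  # ~ -1.1802469e+29 (what a little-endian x86 machine produces)
```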

  • See if this is any better:

    printf("bytes 0x%x float %e\n", *(int *)&flt, flt);
    
  • To see the parameter promotion, change the declaration from float to double. On my machine, that prints:

    bytes 0xefbeadde float -1.860545e+230
    

    The 0xefbeadde is 0xdeadbeef with its bytes reversed (little-endian order). The last 4 bytes of the double are undefined, so the number displayed after float will vary.

    You mentioned it worked on another computer; what kind of computer was that? Must've been big endian where sizeof(float) == sizeof(double) :)

Is the Amazon SimpleDB WSDL for SOAP without WS-Security correct?

The SimpleDB documentation includes this example request for a ListDomains method. Note that there are Signature, Timestamp, AWSAccessKeyId and Version subelements:

  <SOAP-ENV:Body>
    <ListDomainsRequest xmlns="http://sdb.amazonaws.com/doc/2007-11-07">
      <Signature>SZf1CHmQnrZbsrC13hCZS061ywsEXAMPLE&lt;</Signature>
      <Timestamp>2009-02-16T17:39:51.000Z</Timestamp>
      <AWSAccessKeyId>1D9FVRAYCP1VJS767E02EXAMPLE</AWSAccessKeyId>
      <Version>2007-11-07</Version>
      <Action>ListDomains</Action>
    </ListDomainsRequest>
  </SOAP-ENV:Body>

The WSDL uses this definition for ListDomains:

<xs:element name="ListDomains">
 <xs:complexType>
  <xs:sequence>
   <xs:element name="MaxNumberOfDomains" type="xs:int" minOccurs="0"/>
   <xs:element name="NextToken" type="xs:string" minOccurs="0"/>
  </xs:sequence>
 </xs:complexType>
</xs:element>
...
<wsdl:operation name="ListDomains">
 <soap:operation soapAction="ListDomains"/>
 <wsdl:input>
  <soap:body use="literal"/>
     </wsdl:input>
     <wsdl:output>
      <soap:body use="literal"/>
     </wsdl:output>
    </wsdl:operation>

The Signature, Timestamp, AWSAccessKeyId and Version information is not in the ListDomains definition.

AWS customer support already has investigated this and says this is as designed:

"The WSDL will continue to cover only application-level elements, as it is a cleaner approach, fitting better with the long-term "SOAP with WS-Security" envelope/body model."

Is the example request correct? Importing the WSDL for example in Delphi does not generate code for the authorization elements.

From stackoverflow
  • Well, it would appear that the authorization elements are indeed not part of the WSDL which is a bit odd....

    Even funnier - the Amazon docs talk about providing that information in the SOAP header - yet their sample clearly puts it in the <SOAP-ENV:Body> element....

    What happens if you manually add those additional elements either in Delphi code, or in the WSDL itself? Can you tweak it to be so that the SimpleDB service is happy with it?

    Marc

    mjustin : Yes, tweaking is possible but then I will have to apply the changes again every time the WSDL changes. I have even found a link to an (old) hacked version of the WSDL in the AWS developer forum - so somebody had the same problem as I did. Maybe there is a more elegant solution.

Determine Parent Component

Hi

We have TToolbarButton(s) on a toolbar, each with its own associated TPopupMenu. The popup menus are all the same, so we would like to have only one menu for all the toolbar buttons. The problem I have is determining which ToolbarButton invoked the popup menu.

I've tried the following, but I keep getting an access violation.

...

with (Sender as TPopupMenu) do
  ShowMessage((GetParentComponent as TPopupMenu).Name);

...

Any ideas how to get the parent of the popup menu?

Thanks, Pieter.

From stackoverflow
  • Try

      with (sender as  TPopupMenu)  do
          ShowMessage(PopupComponent.Name);
    

    That should give you the TToolButton that was pressed.

    Pieter van Wyk : Unfortunately PopupComponent.Name returns the Toolbar name and not the ToolbuttonName. It works ok on a ListBox. Pieter.

Why does this query only return results with non-empty child tables?

This is a simplified version of a query we are running where we need to find all rows in the main parent table where the child rows match. The query below returns no results when one of the child tables is empty.

The main table has two child tables:

CREATE TABLE main (id INT PRIMARY KEY, name VARCHAR(8));

CREATE TABLE child1(id INT PRIMARY KEY, main_id int, name VARCHAR(8));
ALTER TABLE child1 add constraint fk_child1_main foreign key (main_id) references main (id);

CREATE TABLE child2(id INT PRIMARY KEY, main_id int, name VARCHAR(8));
ALTER TABLE child2 add constraint fk_child2_main foreign key (main_id) references main (id);

INSERT INTO main (id, name) VALUES (1, 'main');
INSERT INTO child1 (id, main_id, name) VALUES (2, 1, 'child1');

There are no rows in child2 and the following query returns no rows when it is empty:

SELECT
  main.*
FROM
  main
INNER JOIN
  child1
ON
  main.id = child1.main_id
INNER JOIN
  child2
ON
  main.id = child2.main_id
WHERE
  child1.name = 'child1' OR
  child2.name = 'DOES NOT EXIST';

If a row is added to child2, even if it doesn't match the WHERE clause, then the SELECT does return the row in the main table.

INSERT INTO child2 (id, main_id, name) VALUES (4, 1, 'child2');

I've tested this on Derby and SQLite, so this looks to be something general with databases.

Why is this behaving this way?

What can I do to fix it?

I could change to a UNION of separate SELECTs, but that's much more verbose, and plus, we're generating the SQL dynamically and I'd rather not have to change our code.

Another fix is just to add a dumb row to the database, but that's messy.

PS The main table is a session table in an asset management system that records the assets that clients look up. There are different types of lookups and each kind gets a separate child table, plus there is an attributes child table for key/value pairs for the session that can be searched on.

From stackoverflow
  • When child2 has no rows, the query returns no rows because of the inner join to the child2 table. If you inner join to a table that has no rows, you will never get any results - you would have to outer join to child2 instead if you want to get results when child2 is empty.

    When child2 does have a row, the reason your query returns results is because of the where clause:

    WHERE
      child1.name = 'child1' OR
      child2.name = 'DOES NOT EXIST';
    

    The inner join says there has to be something in child2 with a matching ID, but the where clause has an OR in it, so you will get results just because child1.name = 'child1'. After that, the database doesn't have to bother looking at the child2 table.

    To fix it:

    I have a hunch that you only want to return the child rows when some condition is met. You should outer-join to both of them, and perhaps also move your extra conditions from the where clause to the join clause, like this:

    SELECT
      main.*
    FROM
      main
    LEFT OUTER JOIN
      child1
    ON
      main.id = child1.main_id
      AND child1.name = 'child1'
    LEFT OUTER JOIN
      child2
    ON
      main.id = child2.main_id
      AND child2.name = 'whatever'
    
    • The outer joins mean you have the chance of getting results even if one table is empty.

    • Moving the extra conditions (child1.name = ...) from the WHERE clause to the outer join means you only get the tables info if the condition is true. (I think this might be what you are trying to do, but maybe not, in which case leave the conditions in the WHERE clause where you originally had them.)
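    Since the question mentions SQLite, the inner vs. outer join behaviour is easy to reproduce with Python's stdlib sqlite3 module (a quick sketch using the question's schema, with child2 left empty):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE main (id INT PRIMARY KEY, name VARCHAR(8));
CREATE TABLE child1 (id INT PRIMARY KEY, main_id INT, name VARCHAR(8));
CREATE TABLE child2 (id INT PRIMARY KEY, main_id INT, name VARCHAR(8));
INSERT INTO main VALUES (1, 'main');
INSERT INTO child1 VALUES (2, 1, 'child1');
""")

# Inner join to the empty child2: no rows survive the join, the WHERE never runs.
inner = con.execute("""
    SELECT main.* FROM main
    INNER JOIN child1 ON main.id = child1.main_id
    INNER JOIN child2 ON main.id = child2.main_id
    WHERE child1.name = 'child1' OR child2.name = 'DOES NOT EXIST'
""").fetchall()

# Outer joins with the conditions moved into the ON clauses: main row survives.
outer = con.execute("""
    SELECT main.* FROM main
    LEFT OUTER JOIN child1 ON main.id = child1.main_id AND child1.name = 'child1'
    LEFT OUTER JOIN child2 ON main.id = child2.main_id AND child2.name = 'DOES NOT EXIST'
""").fetchall()

print(inner)  # []
print(outer)  # [(1, 'main')]
```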

  • It's returning nothing because you are using inner joins.

    Change your inner joins to left joins

    Blair Zajac : Then why does adding a non-matching row to child2 change the result of the query? According to this statement, even after adding a non-matching row to child2, the query should still return no rows.
    Blair Zajac : Nevermind, I see that it's the main.id = child2.main_id that was preventing the query from returning any results, even if child2.name didn't match. I was just ignoring this part of the query.
  • When you say INNER JOIN you are asking the query to return rows that have results on both sides of the join. This means any rows that do not have matching child rows will be removed.

    It sounds like what you are looking for is LEFT JOIN which will include all rows on the left hand side of the join (main) even if they do not have a matching entry on the right hand side (child1, child2).

    This is standard behaviour and a very common problem for people not familiar with SQL. Wikipedia has all the details, otherwise a quick Google search brings up plenty of results.

    Blair Zajac : Then why does adding a non-matching row to child2 change the result of the query? According to this statement, even after adding a non-matching row to child2, the query should still return no rows.
    Blair Zajac : Nevermind, I see that it's the main.id = child2.main_id that was preventing the query from returning any results, even if child2.name didn't match. I was just ignoring this part of the query.