Topic on Extension talk:OdbcDatabase/Flow

Aggregate functions seem to fail

Lbillett (talkcontribs)

Perhaps similar to the issues with specifying a limit, I also bumped into a problem when trying to return the result of a COUNT (with External Data):

Function: EDUtils::searchDB
Error: 37000 [unixODBC][FreeTDS][SQL Server]The cursor was not declared. (myodbcname)

This is querying an MSSQL server through ODBC.

A slightly different error appears when trying to specify a limit (top=):

Function: EDUtils::searchDB
Error: 37000 [unixODBC][FreeTDS][SQL Server]Incorrect syntax near 'LIMIT'. (myodbcname)

The same queries against a MySQL server with similar tables work as expected. Discussion over at External Data suggested the problem is likely in this connector. Is there any way to work around this?

Thanks!

Chiefgeek157 (talkcontribs)

Great question. I am unsure of the answer as I (*ahem*) don't use my own extension any more on our wiki. I did some Googling, but did not turn up much yet.

Lbillett (talkcontribs)

Ah, OK. Curious: did you find an alternative solution, or did the need go away? The whole PHP-to-MSSQL-under-Linux situation has really been a bit troublesome the last few years. It's the one incentive I know of to move to Windows.

Chiefgeek157 (talkcontribs)

We solved the problem a different way. I have a .NET application with a SQL Server DB that has information I wanted to display in the wiki. At first I created a page for each "thing" in the wiki, then did a DB query using OdbcDatasource back to SQL Server to get the details for that "thing" and display them on the wiki page. This meant, among other things, that page loads required a DB call every time, AND the info returned by the query was not part of the wiki page, so it was not searchable in the wiki.

We chose to reverse the process and have the .NET app, through a service, create, update, or delete wiki pages for each "thing" automatically as the data in the .NET app changes. We now synchronize several thousand wiki pages from external sources using this method. The external updaters only manage a portion of the page (essentially the main template on the page), so users can edit the rest of the page with additional supporting info, which was the whole reason to do this in the first place. Users editing the page source see a comment-delimited block warning them not to edit the contents between the comments, because they will eventually be overwritten.
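As a rough illustration of that comment-delimited block approach (the marker strings, class, and method names below are invented for illustration, not the actual updater), the sync side only has to splice new content between two markers and leave the rest of the user-edited page alone:

```csharp
using System.Text.RegularExpressions;

// Minimal sketch of the "comment-delimited block" idea described above.
public static class ManagedBlock
{
    private const string Begin = "<!-- AUTO-SYNC BEGIN: do not edit, this block is overwritten -->";
    private const string End   = "<!-- AUTO-SYNC END -->";

    // Replace only the managed portion of a page's wikitext, leaving any
    // user-added content outside the markers untouched.
    public static string Replace(string pageText, string newBlockBody)
    {
        string pattern = Regex.Escape(Begin) + ".*?" + Regex.Escape(End);
        string block = Begin + "\n" + newBlockBody + "\n" + End;

        // If the markers are missing (e.g. a brand-new page), append the block.
        if (!Regex.IsMatch(pageText, pattern, RegexOptions.Singleline))
            return pageText.TrimEnd() + "\n\n" + block + "\n";

        // "$" must be doubled so Regex.Replace treats it literally.
        return Regex.Replace(pageText, pattern, block.Replace("$", "$$"),
                             RegexOptions.Singleline);
    }
}
```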

Now all the external info is embedded in the wiki page and is fully searchable in the wiki.

It has worked wonderfully.

Lbillett (talkcontribs)

VERY interesting. Since my information is basically numbers and dates, leaving it on the server is better for this particular application. But just the idea that you are leveraging automatic article edits (which I've only begun to explore) to give you both access to controlled information AND the ability for your users to contribute to it in one spot gives me so many ideas!! Thanks!

Chiefgeek157 (talkcontribs)

To add to your mental soup, we went further by creating a special template in the wiki that represents the pattern for retrieving data from the .NET entity model. This allows me to write the sync engine in .NET once and control what data gets retrieved using a wiki page.

So for a given entity in my DB (say a widget), I have two templates. The first is Template:Widget, which gets placed on every page that represents a widget and has parameters for each piece of information I want to get from my DB.

The second is "Template:Widget (DB)", which contains what looks like a template (so it doesn't mess up the wiki), but the parameter references are really references to fields on the entity itself (e.g. {{{Description}}}, {{{CreatedTime}}}, {{{Aliases.Name}}}). The engine knows how to use reflection in C# to get the values of those fields and stuff them into the text of the wiki page as the values of the template parameters, handling lists and nested fields. It works out very well. If I want to retrieve different data from the DB, I just change the DB template to pull other fields and stuff them into the right places in the page's template.
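A minimal sketch of the kind of reflection lookup described above (the class name, the property-only lookup, and the comma-joining rule for lists are assumptions, not the actual engine): given an entity object and a dotted path like "Aliases.Name", walk the properties, and when a list is encountered, apply the rest of the path to each element and join the results.

```csharp
using System.Collections;
using System.Linq;
using System.Reflection;

public static class EntityFields
{
    // Resolve a dotted path such as "Description" or "Aliases.Name" against an
    // entity object and return a string usable as a template parameter value.
    public static string Resolve(object entity, string path) =>
        ResolveSegments(entity, path.Split('.'), 0);

    private static string ResolveSegments(object current, string[] segments, int index)
    {
        if (current == null) return string.Empty;
        if (index == segments.Length) return current.ToString() ?? string.Empty;

        // Lists (e.g. Aliases) get the remaining path applied to each element,
        // with the results joined into a single comma-separated value.
        if (current is IEnumerable items && !(current is string))
            return string.Join(", ",
                items.Cast<object>().Select(item => ResolveSegments(item, segments, index)));

        PropertyInfo prop = current.GetType().GetProperty(segments[index],
            BindingFlags.Public | BindingFlags.Instance);
        return ResolveSegments(prop?.GetValue(current), segments, index + 1);
    }
}
```

With something like that in place, a DB-template parameter such as {{{Aliases.Name}}} could be filled by a call along the lines of EntityFields.Resolve(widget, "Aliases.Name").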

The DB template also has special control fields that tell the engine how to locate the correct wiki page given a DB identifier. The engine performs a Semantic MediaWiki ASK query to find all pages in a list of categories with a list of property/value pairs. If no pages come back, the page is created. If one page comes back, it is updated. If more than one page comes back, it is an error.
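That locate-then-branch logic reads roughly like the sketch below. The IWikiClient wrapper, its method names, and the category/property names in the example are hypothetical; it only assumes that an SMW ASK query can be run and page titles returned.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical wrapper over the MediaWiki / Semantic MediaWiki APIs; the real
// engine's client will look different.
public interface IWikiClient
{
    IReadOnlyList<string> Ask(string askQuery);   // run an SMW ASK query, return page titles
    void CreatePage(string title, string text);
    void UpdatePage(string title, string text);
}

public class PageLocator
{
    private readonly IWikiClient wiki;
    public PageLocator(IWikiClient wiki) { this.wiki = wiki; }

    // Build an ASK condition from the control fields: categories plus
    // property/value pairs, e.g. [[Category:Widgets]][[Has widget id::1234]]
    public static string BuildAskQuery(IEnumerable<string> categories,
                                       IDictionary<string, string> properties)
    {
        var parts = categories.Select(c => $"[[Category:{c}]]")
            .Concat(properties.Select(p => $"[[{p.Key}::{p.Value}]]"));
        return string.Concat(parts);
    }

    // 0 matches -> create, 1 match -> update, more than 1 -> error.
    public void Sync(string askQuery, string proposedTitle, string pageText)
    {
        IReadOnlyList<string> matches = wiki.Ask(askQuery);
        switch (matches.Count)
        {
            case 0:  wiki.CreatePage(proposedTitle, pageText); break;
            case 1:  wiki.UpdatePage(matches[0], pageText);    break;
            default: throw new InvalidOperationException(
                $"Ambiguous match: {matches.Count} pages satisfy '{askQuery}'.");
        }
    }
}
```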
