12 Apr 2024

Some thoughts on the language server and its usefulness in the roobuilder

Since adding the vala language server to roobuilder, I have disabled quite a bit of the compiler code that was in the existing code base, primarily removing the code copied from libvala, which reduces the long-term maintenance burden.
The initial integration of the language server provided error reporting: not just the syntax errors that prevent compiles, but also warnings, including deprecation warnings.
These were nicely integrated into both the editor and the node navigation of the UI builder. This did not hugely change the existing usability, as we already had error reporting like this before; however, the way the language server reports errors is far nicer than the original code we had.
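Under the hood these arrive as an LSP notification. I won't swear the builder's plumbing looks exactly like this, but a minimal sketch over jsonrpc-glib, assuming `client` is a Jsonrpc.Client already connected to the vala language server, looks roughly like:

client.notification.connect ((method, params) => {
    if (method != "textDocument/publishDiagnostics" || params == null) {
        return;
    }
    string uri = "";
    params.lookup ("uri", "s", out uri);
    var diags = params.lookup_value ("diagnostics", null);
    if (diags == null) {
        return;
    }
    for (size_t i = 0; i < diags.n_children (); i++) {
        var d = diags.get_child_value (i);
        if (d.is_of_type (GLib.VariantType.VARIANT)) {
            d = d.get_variant ();   // the array can arrive as 'av', so unbox
        }
        string msg = "";
        d.lookup ("message", "s", out msg);
        // hand off to the editor and the node tree from here
        GLib.debug ("diagnostic in %s : %s", uri, msg);
    }
});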

Highlighting a warning in the editor
Notably, to get the language server working we had to switch to meson as the build tool. Once I had worked out how to use it, this proved a major improvement over both autoconf and cmake, which I had previously been using on other projects. It also made it quite easy to bundle resources like images and data into the compiled binary, something I had avoided before. It did not, however, solve the issue that glib's settings library appears to be almost unusable, due to the way it requires things to be done at compile time that would not really happen if you installed via packaging.
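The runtime side of the resource bundling is pleasantly boring once meson's gnome.compile_resources() has linked the bundle in; a minimal sketch of reading one back, with a made-up resource path:

try {
    var bytes = GLib.resources_lookup_data (
        "/org/roojs/roobuilder/images/logo.png",   // hypothetical path from the gresource.xml
        GLib.ResourceLookupFlags.NONE);
    GLib.debug ("bundled resource is %zu bytes", bytes.get_size ());
} catch (GLib.Error e) {
    GLib.warning ("resource missing from the binary: %s", e.message);
}
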
Completion provider mostly working but a bit slow
The next stage of using the language server was to add the completion engine. The biggest issue with completion is the requirement that the code inside the language server matches exactly what is being edited in the editor.
This means continually passing an up-to-date version of the text to the language server. The language server then runs a compile; from there it can work out the type of the symbol being completed and pass back suggestions. On anything other than a single file or a very tiny project, this compile can take a few seconds before the completion engine is able to return a list of suggestions.
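The shape of that traffic, as a sketch, assuming the same Jsonrpc.Client as above and full-document sync, i.e. the whole buffer is resent on every change, which is exactly what makes the lag visible:

// resend the whole buffer so the server always compiles what the editor shows
public async void notify_change (Jsonrpc.Client client,
                string uri, int version, string text) throws GLib.Error
{
    var params = new GLib.Variant.parsed (
        "{'textDocument': <{'uri': <%s>, 'version': <%i>}>, 'contentChanges': <[{'text': <%s>}]>}",
        uri, version, text);
    yield client.send_notification_async ("textDocument/didChange", params, null);
}
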
The end result is that the completion popup frequently appears a minute or so after you finish typing, or after you have jumped to another window, where it pops up floating in space. It's quite cool as a demo but doesn't really have much practical use under these restrictions. It is also notable that completion suggestions only work well if you already know which method or property you want to call or reference. With any kind of large library you are frequently not quite sure what that property or method might be, so you end up looking at the documentation anyway.
To solve that I started looking at the hover feature of the language server, initially implementing it as a mouse-over popup. This suffered from a similar problem to the completion engine: the round trip between the server and the editor has enough lag that you really have to wait for the hover to appear. So rather than using the hover method for mouse-over, I went with using it to fill a context bar at the top of the editor, where there was a little whitespace left over. This seemed like it could be a genuinely useful feature: the language server returns quite a good signature for whatever is under the cursor, including the type of the symbol, e.g. the object name, or the method and the parameters it takes.
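Fetching that signature is a plain request/response; a minimal sketch of the call behind the context bar (LSP positions are zero-based, and the 'contents' field of the reply carries the signature text):

public async GLib.Variant? hover_at (Jsonrpc.Client client,
                string uri, int line, int col) throws GLib.Error
{
    var params = new GLib.Variant.parsed (
        "{'textDocument': <{'uri': <%s>}>, 'position': <{'line': <%i>, 'character': <%i>}>}",
        uri, line, col);
    GLib.Variant? reply;
    yield client.call_async ("textDocument/hover", params, null, out reply);
    return reply;   // null when there is nothing under the cursor
}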

Thinking this might be useful, I looked at whether, having got this context bar, it was possible to look up the documentation for the symbol it shows, for example all the properties and methods of an object, by sending the language server that object type, or the method currently selected.
hover provider providing a context menu at the top of the code
This is where I started to run into the limitations of the language server. It has some features to search for symbol types, but for the simple request, "give me all the information you have on this type", I could not find any method in the specification. The number of methods that actually return documentation also appears quite limited; the only ones I initially worked out were hover and completion.
code navigation on the right of the plaintext editor
One thing I did look at was the API to return document symbols. After a bit of hunting, this can return a tree of symbols within a document, which is extremely handy as a navigational tool for jumping to code within a class. That was added as a right-hand navigation bar on the plain text editing windows. (It always amuses me to see editors with what I think are called minimaps, which look like a zoomed-out version of a file and seem absolutely pointless as a navigational tool.) This tree shows the methods and properties of the class in the file being edited, and I think I made the mouse-over show types etc. It did not really solve the documentation issue though, as the document symbol feature does not return any help documentation, as per the API.
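For completeness, a sketch of walking a textDocument/documentSymbol reply into rows; each entry carries 'name', 'kind' and nested 'children', which is all the navigation bar needs:

void add_symbols (GLib.Variant list, int depth)
{
    for (size_t i = 0; i < list.n_children (); i++) {
        var sym = list.get_child_value (i);
        if (sym.is_of_type (GLib.VariantType.VARIANT)) {
            sym = sym.get_variant ();
        }
        string name = "";
        sym.lookup ("name", "s", out name);
        GLib.debug ("%*s%s", depth * 2, "", name);   // add a row to the nav model here
        var kids = sym.lookup_value ("children", null);
        if (kids != null) {
            add_symbols (kids, depth + 1);
        }
    }
}
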
In my usual disorganized plan for the editor, and with my constant fascination with languages, I decided to spend a bit of effort looking into this in more detail. The first step was to look at how the language server actually extracts data from the vapi and gir files. In the original design we fetched quite a bit of the structural data from the vapi to fill in all of the properties for the gtk widgets. This part of the code has a long history, as I actually started with the gir files when it was written for the JavaScript seed engine.
That code had evolved over time to not actually use much of the gir data; however, it was all structurally named around it, and the symbol management, which also wrapped the JavaScript user interface builder, shared the same structure.
Going forward, having the ability to properly query any type of object, whether it is part of a library or part of the code base, required extracting symbols the same way the language server does. The first step was really just to build a proof of concept, called from the command line, to convert vapi files and the code base into a tree structure. I decided to create a new base class that really just stores this type data, and to extend that class to handle the various sources, like vala and eventually the JavaScript library. It also ended up being extended to handle gir files, which I will come to later.
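The shape of it is roughly this; a sketch of the idea, not the actual roobuilder class names:

public class SymbolNode : GLib.Object {
    public string name = "";
    public string kind = "";    // class / method / property ...
    public string doc = "";     // documentation, where the source has any
    public GLib.List<SymbolNode> children = new GLib.List<SymbolNode> ();
}

public abstract class SymbolSource : GLib.Object {
    // turn one input file into a tree of SymbolNode
    public abstract SymbolNode parse (string path) throws GLib.Error;
}

public class VapiSource : SymbolSource {
    public override SymbolNode parse (string path) throws GLib.Error {
        return new SymbolNode ();   // the real version drives libvala here,
                                    // and a GirSource etc. covers the other inputs
    }
}
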
Having done this, it became pretty clear early on that using this code inline while things are being edited would have the same problem the language server currently has: the parsing really needs to happen in the background, and quickly. So it became obvious that the whole parse should be done in a thread. This leads to the interesting problem that if you are compiling in a separate thread, you then need some way to pass the compiled data back to the original thread.
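The standard GLib answer, sketched below, is to parse in a GLib.Thread and bounce the result back through an idle callback on the main loop; parse_everything and update_navigation_tree are hypothetical stand-ins:

new GLib.Thread<void*> ("symbol-parse", () => {
    var root = parse_everything ();         // hypothetical slow parse
    GLib.Idle.add (() => {
        update_navigation_tree (root);      // hypothetical, main loop only
        return GLib.Source.REMOVE;
    });
    return null;
});
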
Having worked with SQLite before, I had a suspicion it might be a better solution. All of the file structures we are currently using, like gir and vapi, have a tree-like structure, and the existing data from the previous iteration of the parsing engine, which was still being used for the properties, looked like it could really just be mapped into a single table. I also realized that SQLite appears to work across threads, although I have not tested this bit completely, primarily because we are using memory-based databases, which I am guessing will work between the threads.
Phase one was to parse the code base along with the vapi files and store the result in the SQL database. Once a vapi file has been parsed there is no need to update the database, as that file does not change; in general, that part of the engine will not update symbols in the database for files that have not changed. This makes the assumption that the symbol types of files actively being edited do not affect the symbol database of other files in the project. I still need to confirm whether that is really the case, but in general I think it is.
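A sketch of the single table this maps onto, using the sqlite3 vapi; the FULLMUTEX flag is my assumption for sharing one in-memory connection between threads, which as noted is still untested:

// valac --pkg sqlite3 ; schema and flags are illustrative only
Sqlite.Database open_symbol_db () {
    Sqlite.Database db;
    var rc = Sqlite.Database.open_v2 (":memory:", out db,
        Sqlite.OPEN_READWRITE | Sqlite.OPEN_CREATE | Sqlite.OPEN_FULLMUTEX);
    assert (rc == Sqlite.OK);
    db.exec ("""
        CREATE TABLE symbol (
            id        INTEGER PRIMARY KEY,
            parent_id INTEGER,    -- the tree, stored as an adjacency list
            file      TEXT,       -- vapi / gir / project file it came from
            name      TEXT,
            kind      TEXT,       -- class / method / property ...
            doc       TEXT        -- documentation, mostly from the gir side
        )""");
    return db;
}
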
Phase two looks at the GIR files. These files are still useful as they contain documentation for the object libraries. There is however an issue to deal with: although they generally map directly to vala, there are quite a few instances where the way vala has wrapped the libraries diverges from the gir files. My initial hope was to use the libvala code, in a similar way to the language server, to extract the documentation from these files. That proved to be a bit of a fool's game. Technically, the language server can compile a valid project against gir files to do bindings in some scenarios, but that usage is pretty minimal, and to be honest those files are more useful as a data set for the documentation. Since they are relatively static, it seemed more helpful to just scan the whole lot, store it in the database, and use it when needed, rather than scanning specific gir files on demand.
Since the language server uses gir files in a similar way to the compiler, it did not really like having multiple versions, e.g. gtk 3 and gtk 4, read in while being expected to spit out documentation for both. So some really old code I had written to handle imports was grabbed to scan very quickly through these XML files and extract the symbol names and documentation, along with the file name and version the code is associated with.
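That scan does not need libxml; GLib's own stream parser will do to pull out name attributes and the <doc> text that follows them. A rough sketch, ignoring the namespace nesting a real gir file has:

public class GirScan : GLib.Object {
    const GLib.MarkupParser parser = { start, null, text, null, null };
    GLib.MarkupParseContext context;
    string current = "";
    bool in_doc = false;

    public GirScan () {
        context = new GLib.MarkupParseContext (parser, 0, this, null);
    }

    void start (GLib.MarkupParseContext ctx, string name,
                string[] attr_names, string[] attr_values) throws GLib.MarkupError {
        in_doc = (name == "doc");
        for (int i = 0; i < attr_names.length; i++) {
            if (attr_names[i] == "name") {
                current = attr_values[i];   // the symbol the next <doc> belongs to
            }
        }
    }

    void text (GLib.MarkupParseContext ctx, string txt, size_t len) throws GLib.MarkupError {
        if (in_doc) {
            GLib.debug ("%s : %s", current, txt);   // store name + doc in the db here
        }
    }

    public void scan (string path) throws GLib.Error {
        string xml;
        GLib.FileUtils.get_contents (path, out xml);
        context.parse (xml, xml.length);
    }
}
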
The last part currently done is wrapping this gir extraction into the startup process as a background thread when you load the builder.
The next step, in theory, is to hook in the code compiler as a background process while the code is being edited. This probably needs to behave a bit better than the current language server interaction: the compilation process needs to be cancelled if a new compilation is required, and compilation needs to wait until editing has reasonably finished rather than starting on the first change, which I previously managed with a little trickery on timeouts and asynchronous checks.
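That trickery is roughly the following pattern, sitting in the editor's setup code: a short timer restarted on every keystroke, plus a GLib.Cancellable handed to whichever compile is in flight. Here `buffer` is assumed to be the editor's Gtk.TextBuffer and `start_background_compile` a hypothetical async compile:

uint pending = 0;
var cancel = new GLib.Cancellable ();

buffer.changed.connect (() => {
    if (pending != 0) {
        GLib.Source.remove (pending);    // still typing, restart the clock
    }
    pending = GLib.Timeout.add (500, () => {
        pending = 0;
        cancel.cancel ();                         // abandon the in-flight compile
        cancel = new GLib.Cancellable ();
        start_background_compile.begin (cancel);  // hypothetical
        return GLib.Source.REMOVE;
    });
});
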
This approach also has interesting relevance to another significant issue with the editor. Some of the JavaScript user interface files contain a huge tree, which causes significant performance issues when edited. Part of this is the re-rendering in webkit, which can be turned off.. however, I think the other issue is that, due to the nature of the renderer and the need to store references mapping each output line to its node and property, the JavaScript engine does not cache in any sensible way the conversion of the node tree into a file, or the mapping of lines to properties. In theory some of this could be sped up by storing the line numbers as relative rather than absolute values. It does however suggest that a similar approach, background threading of the rendering of the tree into a string, with the database storing the line numbers, would let the foreground thread use the background thread's generated data without too many issues. But as usual with the editor design: far too many ideas and not enough time.







19 Nov 2023

Roo Builder for Gtk4 moving forward

The gtk4 port of my roobuilder is getting closer to a release.. well, good enough that it will replace the existing gtk3 version.

Porting has been quite an effort. The initial phase of switching to gtk4 libraries and fixing all the compiler errors took a few months (this is very much a pet, part time project.. so things take a while).



After reaching the point where there were no compiler errors (although it still has plenty of warnings), the next step was to see if it ran.. which it obviously failed at, badly.

It's been a long road, and it started with issues around how gtk4 windows are more deeply tied to the application class. It's been so long since I fixed that that I've forgotten the details. In the gtk3 version, although it had an application class, it did not really do that much.

More recently, though, I've been going through the interface migration. Key to this has been the migration away from gtktreeview, which now seems to be unusable for drag and drop of outside elements, as well as being deprecated. So I had to migrate all the code to use columnview with treemodels.



On the positive side, the use of an array of objects as the storage for trees, and the new method of rendering cells, is a massive improvement over gtk3, and works like magic with vala objects. Especially clever is the method for updating cell content: you create get/set properties on the vala object, and any change to the property instantly updates the label text.



This makes the code that manages the node tree, the core of the ui builder, massively simpler. No need to keep calling refresh or deal with tree iterators like the gtk3 treeview.

this.el.bind.connect( (listitem) => { 
  // fetch the label created in the factory's setup phase, and the row's data object
  var lb = (Gtk.Label) ((Gtk.ListItem)listitem).get_child();
  var item = (JsRender.NodeProp) ((Gtk.ListItem)listitem).get_item();
  // from here on, any change to the property updates the label automatically
  item.bind_property("to_display_name_prop", lb, "label", GLib.BindingFlags.SYNC_CREATE);
});

The method for sorting these views is also nothing short of magical, when you finally find a code example for sorting.. it's very easy.. but hunting down a good sample was difficult.

this.el.set_sorter( new Gtk.StringSorter(
  new Gtk.PropertyExpression(typeof(JsRender.NodeProp), null, "name")
));
// along with this (on the sort model, wrapping the view's sorter)
this.el.set_sorter(new Gtk.TreeListRowSorter(_this.view.el.sorter));
 


The process has not been without difficulty though; the new widgets seriously lack the ability to convert click events into cell row/column detection, which is essential for drag and drop. The only way to do it is to iterate through the children and siblings and use math to calculate which row was selected. This was made more complicated when a recent update to the widget changed the structure of the child widgets, breaking all the row detection code. Note to self... don't complain.. send a patch...

// returns the row number under the pointer (or -1), and sets `pos` to
// above / over / below for drop positioning; x is unused for row detection
public int getRowAt (double x, double y, out string pos) {

    // from https://discourse.gnome.org/t/gtk4-finding-a-row-data-on-gtkcolumnview/8465
    GLib.debug("getRowAt");
    var child = this.el.get_first_child();
    Gtk.Allocation alloc = { 0, 0, 0, 0 };
    var line_no = -1;
    var reading_header = true;
    var curr_y = 0;
    var header_height = 0;
    pos = "over";

    while (child != null) {
        //GLib.debug("Got %s", child.get_type().name());
        if (reading_header) {
            // the header row is a GtkColumnViewRowWidget; remember where it ends
            if (child.get_type().name() == "GtkColumnViewRowWidget") {
                child.get_allocation(out alloc);
            }
            if (child.get_type().name() != "GtkColumnListView") {
                child = child.get_next_sibling();
                continue;
            }
            // found the list body - descend into the rows
            child = child.get_first_child();
            header_height = alloc.y + alloc.height;
            curr_y = header_height;
            reading_header = false;
        }
        if (child.get_type().name() != "GtkColumnViewRowWidget") {
            child = child.get_next_sibling();
            continue;
        }
        line_no++;

        child.get_allocation(out alloc);
        //GLib.debug("got cell xy = %d,%d  w,h= %d,%d", alloc.x, alloc.y, alloc.width, alloc.height);

        if (y > curr_y && y <= header_height + alloc.height + alloc.y) {
            // top 20% of the row counts as 'above', bottom 20% as 'below'
            if (y > (header_height + alloc.y + (alloc.height * 0.8))) {
                pos = "below";
            } else if (y > (header_height + alloc.y + (alloc.height * 0.2))) {
                pos = "over";
            } else {
                pos = "above";
            }
            GLib.debug("getRowAt return : %d, %s", line_no, pos);
            return line_no;
        }
        curr_y = header_height + alloc.height + alloc.y;

        child = child.get_next_sibling();
    }
    return -1;
}


The other issue is that double clicking on the cells is a bit haphazard. Sometimes it triggers.. other times you feel like you are pressing a lift button multiple times, hoping it will come faster.

https://gitlab.gnome.org/GNOME/gtk/-/issues/4364

The other bug I managed to find was putting dropdowns in popovers. Don't do this, it will hang the application. At present I've used a small column view to replace it.. but it needs a better solution, as the interface is very confusing.

https://gitlab.gnome.org/GNOME/gtk/-/issues/5568

The other hill that I've yet to climb is context menus. Gtk4's menu system is heavily focused on application menus, and menuitems have been dropped completely.



Context menus are usually closely related to the widget that triggers them, and sending a signal to the application, which then sends it back to the widget, seems very unnatural. For the time being I've ended up with popovers containing buttons.. not perfect, but usable.
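For reference, the stand-in looks something like the sketch below; `tree_view` is a hypothetical widget the menu belongs to:

var pop = new Gtk.Popover ();
var box = new Gtk.Box (Gtk.Orientation.VERTICAL, 0);
var del = new Gtk.Button.with_label ("Delete Node");
del.clicked.connect (() => {
    pop.popdown ();
    // act on the widget directly, no app-level action round trip
});
box.append (del);
pop.set_child (box);
pop.has_arrow = false;
pop.set_parent (tree_view);               // hypothetical owning widget

var click = new Gtk.GestureClick ();
click.set_button (Gdk.BUTTON_SECONDARY);  // right click
click.pressed.connect ((n_press, x, y) => {
    Gdk.Rectangle r = { (int) x, (int) y, 1, 1 };
    pop.set_pointing_to (r);
    pop.popup ();
});
tree_view.add_controller (click);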

I also had the opportunity to change how the object tree handles adding objects that are properties of the parent.

Previously, adding a model to a columnview was done using the add child '+' next to the columnview item in the tree. It would show a list of potential child objects, including ones that are properties.

This only works for the Roo library so far.



I've removed those from that object list now and put them in the properties dialog, which lists all the potential implementations of the property; if you expand the list, you can double click one to add it to the tree.

Anyway, now to work out some good demo videos



05 Mar 2020

Clustered Web Applications - Mysql and File replication

In mid-2018, one of our clients asked if we could improve the reliability of their web applications. The system was developed by us and was hosted on a single server in Hong Kong. Over the last 5 years or so, the server had been sporadically unavailable for various reasons:


  • DDOS attack on the Hosting provider's network
  • Hardware failure - both on the hosting machine and the provider's network hardware.
  • Disk capacity issues


While most of these had been dealt with reasonably promptly, the service provided by our client to their customers had been down for periods of up to a day. So we started investigating solutions to make this redundant and considerably more reliable.


Since this was not a financial institution with endless money to throw at the problem, Amazon, Azure etc. were considered too pricey; and even if they did provide a more reliable solution, there was still a chance it could be susceptible to network or DDOS attacks. So the approach we took was to build a cluster of reasonably priced servers (both physical and virtual) hosted at multiple hosting providers.




This represented the starting point: we had already separated the application and mysql server into individual containers, which made backups and restoration trivial, and theoretically made the cluster implementation somewhat simpler.


To implement a full clustering solution, not just a redundancy solution, we needed to solve a few issues:


  • Mysql Clustering
  • File system Clustering
  • Load Balancing
  • Private Networking between the various components.


The simplest of these was load balancing: we had already been using Cloudflare to provide free SSL (we tend to use letsencrypt on solutions these days, but Cloudflare has proved reasonably resilient, although it does still leave a single point of failure from our perspective).


The other two however proved to be more challenging than we expected.


Mysql Clustering


Anyone who has used MySQL has normally, at some point, set up a master/slave backup system. It's pretty reliable; however, when it came to switching between master and slave, we concluded that the effort involved, especially considering the size of our database, would be problematic. So we started testing the MySQL clustering technologies (note: we stuck to classic MySQL technologies, rather than trying any of the forks/offshoots).


After our initial analysis we settled on NDB clustering, the setup of which proved more than a little problematic, in part due to the database restrictions that the storage engine enforces. Having eventually overcome the initial issues by modifying our schemas slightly, we discovered that in our usage scenario NDB performance was significantly slower than a standalone InnoDB server, to the point where the application became unusable. This may have been due to various factors: memory limitations, or one of the machines using a physical rather than an SSD drive. But after many hours of research and testing, we concluded it was not a viable solution.


After throwing all that research in the bin, the next alternative was an InnoDB cluster. Again this involved quite a learning curve, as management of the cluster is done via mysqlsh, for which, due to the nature of the internet, there is a wealth of out-of-date, contradictory information all over the web, along with rather limited precise information on working configurations. Eventually we managed to sort out both the multitude of configuration settings (enough memory allocated to migrate) and the minor schema modifications needed to make replication work, and the first part of the puzzle was solved.


The final solution for the mysql servers involved hosting on one physical machine and one virtual machine in Hong Kong, plus a Linode VPS in Singapore. This has generally met the initial goal of more stability; however, we have a long-term plan to move more to Linode and remove the Hong Kong physical hardware, as this seems to be our most frequent point of failure. That said, while the machine and network have failed multiple times, the services have remained up throughout.


In addition to the servers, we also added mysqlrouter to the mix. In the initial design it runs in the same container as the mysql server; in hindsight it would have been better to have a separate container for this. In the next phase the mysql servers will be hosted on separate VPSs, with the mysqlrouter container running on the application server VPSs.


File Replication


We did some quite extensive testing of clustered file systems, including getting the application up and running on gluster. This again proved to be a performance issue: we found that gluster killed both CPU and memory.


Eventually, we settled on a multi-pronged approach: the first part being unison for two-way synchronization, the second being splitting the file system into 'active' and archive areas. Our applications generally create files in directories based on YYYY/mm/dd, so a simple script was written to move directories older than a few days from the 'hot' storage area, which was replicated using unison (based on inotify watches), to a 'cold' area that was kept in sync daily using rsync. Softlinks were then created from the hot file areas to point to the correct place in cold storage.
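The script itself was nothing special, but the logic amounts to the sketch below (written in Vala here for consistency; the /data paths are made up, and it assumes hot and cold live on the same filesystem so the move is a rename):

// relocate one day-directory from the unison-replicated 'hot' area to the
// rsync'd 'cold' area, leaving a softlink behind
void archive_old (string day_dir) throws GLib.Error {    // e.g. "2020/03/01"
    var hot  = "/data/hot/"  + day_dir;
    var cold = "/data/cold/" + day_dir;
    GLib.DirUtils.create_with_parents (GLib.Path.get_dirname (cold), 0755);
    if (GLib.FileUtils.rename (hot, cold) != 0) {
        throw new GLib.FileError.FAILED ("move failed for %s", day_dir);
    }
    if (GLib.FileUtils.symlink (cold, hot) != 0) {
        throw new GLib.FileError.FAILED ("symlink failed for %s", day_dir);
    }
}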


This meant we could handle quite a bit of file activity, as one of the applications is constantly creating files, and still have those files available on multiple servers. For the next phase of development we will be running unison in multiple containers, one for each pair of replication targets, and also considering NFS over TCP rather than replication for our two main front-end servers.


Private Networking


One of the early issues before we set all this up was to work out how these different servers would communicate securely with each other. Normally for private networking we had used OpenVPN. This is a client-server spoke system; for a reliable network we did not want a single point of failure, and writing scripts to flip between different OpenVPN servers if something failed seemed rather messy.


To solve this we came across tinc, which solved our redundancy problem brilliantly. Tinc is a mesh VPN which, in theory, can route around broken connections: with servers A, B and C, if the line is down between C and A, then it will route via B. As we found later, though, it does not handle a 'poor' (dropped packets) connection between C and A very well. You also have to make sure all the firewalls are correctly configured: if you misconfigure access so that C and B can see A, but A cannot connect directly to C and B, the network will still appear to work, but it will fall apart as soon as C goes down. It's a real cross-the-t's-and-dot-the-i's network; get it correct, otherwise when it fails you will be hunting down the issue for a while.


This is a map of the current configuration




03 Jan 2019

GitLive - Branching - Merging

As things have slowed down in the new year, I've decided to give this blog a brief sparkle of life. So if you are interested in engaging our services, feel free to send us a message, as we have spare capacity at present.
Almost 9 years ago I created a little application called gitlive. Its aim was to replicate our old subversion environment, where we mounted the subversion server over webdav, and whenever we saved files they were automatically committed to the revision control system.

28 Oct 2016

PDO_DataObject Released

Coding was completed last month, with a huge test suite covering a large proportion of the features. This should mean that replacing DB_DataObjects will be pretty easy.
You can either just check out the code from github / PDO_DataObject, or use the pear channel:
#pear channel-discover roojs.github.com/pear-channel
#pear install roojs/PDO_DataObject-0.0.1 

Documentation

I revived my old PHP_CodeDoc code (that needs publishing). It seemed simpler than trying to use any of the other tools out there. It's a pretty simple tool that extracts structure and documentation comments from the PHP source code. I added a small amount of code to export to our 'Roo UI bjs toolkit format'.
The generated files are pure JSON, and mostly contain the contents of the comments un-formatted. I decided that doing the Markdown conversion in JavaScript was far simpler (I refactored https://github.com/chjj/marked slightly for use with our libraries).
There are a few other tweaks I made: using `@category` to group the documentation and writing category pages (using roobuilder), then putting it all together so that the index.js file loads the parts and renders the manual.
This week I finished tidying up the rendering on mobile and making sure all the comments render nicely using markdown. The result should be a nice, easy-to-read manual.


17 Aug 2016

PDO_DataObject is under way

Work has started on revamping my PEAR package DB_DataObject. While it has served well over the years, and I still use it every day, we have been funded to create a new version which runs on PDO.

There is a migration plan in the github repo for PDO_DataObject. I have currently completed the first two blocks, and almost the third. The key features are:
  • General compatibility with DB_DataObject, with a few exceptions - methods relating to PEAR::DB have been removed and replaced with PDO calls
  • New simpler configuration methods, with some error checking
  • A complete test suite - which we will apply to DB_DataObject to ensure compatibility
  • Chaining for most methods, so this works:
$data = PDO_DataObject::Factory('mytable')
    ->autoJoin()
    ->where("somevalue not like 'fred%'")
    ->limit(100)
    ->fetchAll();
  • Exceptions by default (PEAR is an optional dependency - not required)
  • It should be FAST!!! - standard operations should require ZERO other classes - so no loading up a complex set of support classes.  (odd or exotic features will be moved to secondary classes)
Feel free to watch the repo (we are using auto commit, so the commits are pretty meaningless at present)


19 Nov 2015

Mass email Marketing and anti-spam - some of the how-to..

I'm sure I've mentioned on this blog (probably a few years ago) that we spent about a year developing a very good anti-spam tool, the basis of which is a huge number of mysql stored procedures that process email as it is accepted and forwarded by an exim mail server.

The tricks that it uses are numerous, although generally come from best practices these days.

The whole process starts off with creating a database of:

  • 'known' servers it has talked to before 
  • 'known' domains it has dealt with before.
  • 'known' email addresses it has dealt with before.


If an email / server / domain combo is new and has not been seen before, then apart from greylisting and delaying the socket connections, we also have an optional manual approval process (using the web client).

Moving on from that, we have a number of other tricks, usually involving detecting url links in the email and seeing if any of the greylisted messages (with different 'from' addresses) are also using that url.

On top of this is a web user interface to manage the flow and approval of email. You can see what is in the greylist queue, and set up different accounts for different levels of protection (either post-delivery approval, or pre-delivery approval etc..)

This whole system is very effective when set up correctly. It can produce zero false negatives, and after learning for a while it is pretty transparent to the operations of a company. (Email me if you want a quote for it; it's not that expensive...)

So after having created this best-of-breed anti-spam system, in typical fashion, we got asked to solve the other end: getting large amounts of email delivered to mailing lists.

If you are looking for help with your mass email marketing systems, don't hesitate to contact us sales@roojs.com

Read on to find out how we send out far too many emails (legally and efficiently)

16 Nov 2015

Hydra - Recruitment done right

For the last few months we have been finishing up the first round of work on the Hydra Jobs platform, something we, along with the founders, think is a quite revolutionary idea in IT recruitment.

Key to its design is the idea that the first step in finding someone is not putting up an advert and expecting a shitstorm of resumes that are totally unconnected to the requirements. Take a step back, and realize that as an employer you would rather search all the available candidates than risk the time and wasted effort of sorting through unrelated piles of CVs.

We have spent the last 9 months working to get this to an MVP. The platform is now running, and the business operations are now underway.

To make this work, the first step on Hydra was to design a set of questions that could enable a detailed search to work. What we ended up with is probably the easiest, yet most comprehensive, way of entering your profile data so it can be matched efficiently with companies recruiting staff.

It has been an interesting few months getting Hydra up and going. Now that we are over the hump of the work, we are looking for more interesting projects to take on, so if you know of any, please contact us.

Read on for some of the tricks we used to make this project one of the best recruitment platforms around.

20 May 2015

More on syntax checking vala - and a nice video

As I wrote last week, I had added full syntax checking to the editor, so it runs a full compile check as you type.
Here's a nice video of it working...

After the initial joy of adding this to the code, I soon realized it had a fatal flaw; read on to find out more..


09 May 2015

Fetching Resources from github in the App Builder and fake web servers

My final words this week on the builder - handling resources, and fake web servers

While I talked in the other posts about how the builder extracts the API for various components from the libvala library and the vapi files, some information that the builder requires has to be manually created, or fetched from other locations.

When the builder was written in seed, it basically looked at the source code directory and read files relative to the source code. The Vala version, however, is not expected to know about the source code directory, so I had to use a different approach.

