Original article: towardsdatascience.com/kotlin-the-…

Hinchman-amanda.medium.com/

Published: October 11, 2018 · 6 minute read

This is a follow-up to my KotlinConf talk on Kotlin, TornadoFX, and metaprogramming. In the talk, I offered a general definition of metaprogramming and explored its common forms, using TornadoFX in Kotlin to explore the language’s capabilities. In retrospect, I wish I had focused more on my applied research, so I’m here to talk about the road I’ve traveled and where I’m going!

Just to recap

There are three common forms of metaprogramming.

  • Wizards – also known as “monkey patching,” wizards usually take user-generated input and are limited to markup languages such as HTML or XML. Android Studio’s design tooling is a good example of a wizard. Because wizards depend on error-prone user input, they are considered the lowest form of metaprogramming.
  • Aspect-oriented Programming – AOP is an approach in which you pull cross-cutting concerns out into a higher level of abstraction in order to weave other concerns into a more compatible coexistence. AOP is generally limited to reporting metadata and lacks dynamic properties.
  • Domain-specific Languages – A traditional DSL is a language that performs a specific task and gives up everything irrelevant to that domain. You can manipulate data sets with SQL, but you can’t build an entire application with it. Because DSLs often differ from general-purpose languages like Java, C, and Kotlin, they usually end up stored in separate files or string literals, which creates a lot of overhead.

Coincidentally, Kotlin includes the capabilities of all three forms, and even improves on the current shortcomings of each. With internal DSLs and reified generics, Kotlin found a way to overcome limitations in the JVM stack that had prevented functions from becoming first-class citizens.
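To make the internal-DSL point concrete, here is a minimal sketch (toy names like `Tag`, `html`, and `text` are illustrative only, not TornadoFX APIs) of how lambdas with receivers let markup-like structure live inside ordinary, type-checked Kotlin:

```kotlin
// Toy internal DSL: builds an HTML-ish string using lambdas with receivers.
class Tag(private val name: String) {
    private val children = mutableListOf<String>()

    // Nested tags are just nested lambdas with a Tag receiver.
    fun tag(child: String, block: Tag.() -> Unit = {}) {
        children += Tag(child).apply(block).render()
    }

    fun text(value: String) { children += value }

    fun render(): String = "<$name>${children.joinToString("")}</$name>"
}

fun html(block: Tag.() -> Unit): String = Tag("html").apply(block).render()

fun main() {
    val page = html {
        tag("body") {
            tag("p") { text("Hello, DSL!") }
        }
    }
    println(page) // <html><body><p>Hello, DSL!</p></body></html>
}
```

Unlike an external DSL stored in a separate file or string literal, this stays inside the compiler’s reach: a typo in a tag-building call is a compile error, not a runtime surprise.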

Kotlin moves freely between functional and object-oriented paradigms, but as a statically typed language it runs into the same established challenges in achieving true metaprogramming that other statically typed languages face:

  1. True metaprogramming does not respect encapsulation.
  2. True metaprogramming produces code that is not compiled, so there is no type checking.

Having said that, I tried something that I wasn’t quite sure was really, really brave or really, really stupid.

It turns out it may not be so stupid after all.

Testing in development

Software quality assurance encompasses the means of monitoring the software engineering processes and methods used to ensure quality. Part of that process, of course, is writing tests for the code and ensuring that the product works as expected from the user’s perspective.

I was a little ashamed when I realized I needed UI testing, having never been formally trained in JUnit testing myself. But then I realized that plenty of other engineers shy away from testing too, and that in practice, many testing strategies end up being more trouble than the regression coverage they provide is worth.

About a year ago, I wrote a programmatic wizard for salespeople to generate live demos for potential customers because, well, I was lazy. Coincidentally, that project is a form of monkey patching!

Tornadofx-dnd-tilesfx is open source here

I ended up writing my own drag-and-drop functionality because I didn’t know how to serialize my custom objects. The problem was that I had trouble debugging my drag-and-drop events, and I realized that I needed to test the UI. The other problem? I didn’t know how to write UI tests. So I thought: what if I used metaprogramming to generate these tests for me?

So I started asking other developers and QA engineers.

  • What is a good test?
  • What is a bad test?

I received hundreds of answers, ranging across different frameworks, different strategies, and different styles.

TornadoFX-Suite: more than just applied research

Tornadofx-suite will initially just generate TornadoFX UI tests. But it’s more than that: if this project can be used by many people across multiple frameworks, can we collect that data and use machine learning to find these answers?

Let’s see what the program looks like now, and what it does.

Tornadofx-suite is open source and can be found here.

You’ll notice that the application detects UI inputs; from there, KScripting will be implemented to generate these tests.

Detecting UI input

This has been the most difficult hurdle for the project to overcome so far. Using Kastree (a wrapper around the Kotlin compiler’s parser), I was able to break the Kotlin files I scanned down into abstract syntax trees (ASTs). The reason I use ASTs is that the code analysis can stay framework-agnostic for future use. In creating the required mappings, I ran into a real-life metaprogramming challenge: in parsing, it’s hard to recursively map potentially infinite trees (in both height and breadth) when your static type system cares about the types you have to cast.

There it was, the established challenge that keeps true metaprogramming and statically typed languages (like Kotlin) from playing together: being able to recursively walk the AST decomposition without caring about the type system.

In particular, breaking down properties for the Kotlin language proved difficult. A property may be a collection, a variable, a function independent of a class, or a member. Each of these may be further classified by access level and type, and may contain additional properties nested within.

I’m not saying this is the best solution. I’m not even saying it’s a good one. But I found one! Translating these ASTs into JSON objects/arrays makes the casting significantly easier to recurse over in a more agnostic way.

    private fun detectLambdaControls(node: JsonObject, className: String) {
        val root = node.get("expr")

        if (root.asJsonObject.get("lambda") != null) {
            val rootName = root.asJsonObject
                    .get("expr").asJsonObject
                    .get("name").asString

            // TornadoFX specific
            addControls<INPUTS>(rootName, className)

            // get elements in lambda
            val lambda = root.asJsonObject.get("lambda").asJsonObject
            val elements: JsonArray = lambda.get("func").asJsonObject
                    .get("block").asJsonObject
                    .get("stmts") as JsonArray

            elements.forEach {
                detectLambdaControls(it.asJsonObject, className)
            }
        }
    }

Is the performance poor? Of course it is. Is this a bad implementation? Yes, it is. But does it work? If it works, it’s not stupid.
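The same recursive walk can be sketched framework-agnostically with plain Kotlin maps and lists standing in for Gson’s `JsonObject`/`JsonArray` (a simplified node shape, flattening the `func`/`block` nesting from the real code):

```kotlin
// Recursively collect every "name" found under nested "expr"/"lambda"/"stmts"
// nodes. Plain maps stand in for parsed JSON so the sketch is self-contained.
fun collectNames(node: Map<*, *>, found: MutableList<String> = mutableListOf()): List<String> {
    val expr = node["expr"] as? Map<*, *> ?: return found
    (expr["name"] as? String)?.let { found += it }
    // Descend into the lambda's statement list, if present.
    val stmts = (expr["lambda"] as? Map<*, *>)?.get("stmts") as? List<*> ?: return found
    stmts.filterIsInstance<Map<*, *>>().forEach { collectNames(it, found) }
    return found
}

fun main() {
    val tree = mapOf(
        "expr" to mapOf(
            "name" to "form",
            "lambda" to mapOf(
                "stmts" to listOf(
                    mapOf("expr" to mapOf("name" to "textfield")),
                    mapOf("expr" to mapOf("name" to "button"))
                )
            )
        )
    )
    println(collectNames(tree)) // [form, textfield, button]
}
```

Star-projected `Map<*, *>` casts are exactly the “don’t care about the type system” escape hatch described above: the recursion never needs to know what concrete node type it is holding.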

So, we have these broken-down pieces. How do I know which elements I should care about? It’s easy to customize this for each framework with Kotlin enum classes.

enum class INPUTS {
    Form, TextField, DateField, Button, Action,
    RadioButton, ToggleButton, ComboButton, Checkbox,
    Item, Paginator, PasswordField
}

All right, we have some keywords to watch for. We can use our enum class to loop through our collection of detected controls.

private inline fun <reified T : Enum<T>> addControls(control: String, className: String) {
    enumValues<T>().forEach {
        // if this control is one we care about
        if (control == it.name) {
            if (detectedViewControls.containsKey(className)) {
                ... // add control to existing class collection
            } else {
                ... // create a new class collection and add the control to it
            }
        }
    }
}
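Here is a self-contained, runnable sketch of the same reified-enum lookup (the `Inputs` subset and the `isControl` helper are hypothetical names, not from the project):

```kotlin
// A trimmed-down control enum for demonstration.
enum class Inputs { Form, TextField, Button }

// Reified generics let us enumerate any enum's constants at the call site,
// which is how the scanner matches detected names against framework controls.
inline fun <reified T : Enum<T>> isControl(name: String): Boolean =
    enumValues<T>().any { it.name == name }

fun main() {
    println(isControl<Inputs>("Button")) // true
    println(isControl<Inputs>("Slider")) // false
}
```

Because `T` is reified, no `Class<T>` token has to be threaded through the call: swapping in a different framework’s enum is just a different type argument.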

This is a POC: we can detect the required controls in any framework. We’re working on an agnostic parsing system that will eventually, once it’s mature enough to handle such responsibilities, live in a library of its own.

What’s next?

Action is what affects the environment.

That’s what testing does.

Testing is an active process of exploring the environment and determining how it responds to various conditions.

Testers learn from these interactions.

So if we’re talking about a generative machine learning tool that “tests,” then by definition it has to learn. It must do so by taking action and handling the feedback from those actions. We can start by generating permutations of user input that cover situations a human might never think to cover, but that adds a lot of time to testing. What if we could reduce those permutations and combinations to only the worthwhile tests?
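To make the permutation idea concrete, here is a toy sketch (field names and candidate values are illustrative only) that enumerates the cartesian product of candidate inputs per field, where each combination is one potential generated test case:

```kotlin
// Cartesian product of candidate values per form field: every combination
// becomes one candidate UI test case.
fun permutations(fields: Map<String, List<String>>): List<Map<String, String>> =
    fields.entries.fold(listOf(emptyMap<String, String>())) { acc, (field, values) ->
        acc.flatMap { combo -> values.map { value -> combo + (field to value) } }
    }

fun main() {
    val cases = permutations(mapOf(
        "username" to listOf("", "alice"),
        "password" to listOf("", "p@ss", "averyverylongpassword")
    ))
    println(cases.size) // 6 candidate test cases (2 * 3)
}
```

The combinatorial blow-up is immediate: a handful of fields with a handful of values each already yields hundreds of cases, which is exactly why pruning to the worthwhile subset is the interesting problem.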

The natural progression from metaprogramming is machine learning.

We can collect data from users: what tests were created, which tests initially passed or failed, and how many users contributed to the project. From there, it may be possible to detect components that are known to be problematic and provide smarter tests for them. We might even surface some data on what makes for good UI/UX. I don’t think I’ve even scratched the surface here.

Are you interested in contributing? I hang out a lot on Kotlin Slack, and you can of course reach me on Twitter and GitHub. The project can be found here.

