juan_gandhi: (Default)
In Boolean logic, ¬(A∧B) ≡ (¬A∨¬B); in intuitionism, not so fast.

In intuitionism you can only prove that (¬A∨¬B) ⊢ ¬(A∧B), but not the converse.

Example? Use the semantics trick from Wikipedia.

Namely, take the partial order of open subsets of the interval [0,1] of real numbers. It is a Heyting algebra (for an obvious reason not worth discussing here). Negation is defined like this: take the set complement, then take its interior (the largest open subset it contains). So, e.g., for (a,b), the complement is [0,a]∪[b,1], and the negation is its interior, [0,a)∪(b,1].

Now take A=[0,0.5) and B=(0.5,1]. These two sets don't intersect, so their conjunction A∧B is ∅, and ¬(A∧B) is the whole [0,1].

Actually, A = ¬B and B = ¬A.

And their disjunction, A∨B, which by the above equals ¬A∨¬B, is [0,0.5)∪(0.5,1], which is not the whole [0,1]: the point 0.5 is missing.

Q.E.D. We have an example where this equivalence from Boolean logic, ¬(A∧B) ≡ (¬A∨¬B), does not hold. Profit!

If somebody can show me an example from a finite lattice, that would be very cool. The example I had in my book is just wrong.
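For what it's worth, here is one candidate finite example, checked by brute force: the five open sets of the topology {∅, {1}, {3}, {1,3}, X} on X = {1,2,3} form a Heyting algebra, and with A = {1}, B = {3} the same De Morgan failure shows up. A sketch in plain Scala (names and encoding are mine):

```scala
object FiniteDeMorgan {
  type U = Set[Int]

  // The five open sets of a topology on {1, 2, 3}: a finite Heyting algebra.
  val opens: List[U] = List(Set(), Set(1), Set(3), Set(1, 3), Set(1, 2, 3))

  // Heyting negation: the largest open set disjoint from u
  // (the union of all opens disjoint from u; ∅ always qualifies).
  def not(u: U): U = opens.filter(v => (v & u).isEmpty).reduce(_ | _)

  val a: U = Set(1)
  val b: U = Set(3)

  val lhs: U = not(a & b)      // ¬(A∧B) = ¬∅ = the whole space {1, 2, 3}
  val rhs: U = not(a) | not(b) // ¬A ∨ ¬B = {3} ∪ {1} = {1, 3}

  def main(args: Array[String]): Unit =
    println(s"¬(A∧B) = $lhs, ¬A∨¬B = $rhs, equal: ${lhs == rhs}")
}
```

Only the direction (¬A∨¬B) ⊆ ¬(A∧B) holds here, exactly as in the [0,1] example; the point 2 plays the role of the point 0.5.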
juan_gandhi: (Default)
Here are two simple tests in Kotlin.

    Stream.of<String>().forAll {  it.isEmpty() } shouldBe true
    Stream.of<String>().forAll { !it.isEmpty() } shouldBe true



It's not quite clear how non-professionals will take this. It usually takes my students some effort.
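The same vacuous truth shows up in the plain Scala standard library, with no test framework at all: every predicate, including two mutually exclusive ones, holds on an empty collection.

```scala
object VacuousTruth {
  val empty: List[String] = List.empty

  // ∀x ∈ ∅ . P(x) is true for any P, even for P and its negation.
  val allEmpty: Boolean    = empty.forall(s => s.isEmpty)
  val allNonEmpty: Boolean = empty.forall(s => s.nonEmpty)

  def main(args: Array[String]): Unit =
    println(s"$allEmpty $allNonEmpty")  // prints "true true"
}
```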





juan_gandhi: (Default)
So, if you think you call a function in your code, and this function returns the current time, or a random number... IT'S NOT A FUNCTION. Your code is a function of that "random number", or of "time".

So, if your code is written as something that retrieves this kind of data, then to test your code, you should provide that data. Not just today's date; try a time, say, 10 years from now. As to "random", you provide the randomness. If your code cannot be rewritten to behave as a function of those inputs, make your "random stream" or "time stream" not hard-coded, but substitutable. Mockable. And mock it in your tests. MAKE SURE that you don't provide just happy-path data. Provide anything. A sequence of 100 numbers 4 for random. A time that is 10 years from now. Or even 30 years from now.

Make sure that your tests don't depend on anything. Because a test Must Be Reproducible.

All these things, I know, are obvious to some, and not obvious to others.

If you still have questions, ask. But don't argue. Because what I say is math. Unless you have another math (some people do), or another logic (there are plenty of them), please don't argue.
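A minimal sketch of that substitution in Scala (the names are mine, not from any particular framework): the code under test takes its clock and its randomness as plain arguments, so a test can feed it exactly a hundred 4s and a date ten years out.

```scala
import java.time.Instant

object DeterministicDeps {
  // The code under test: a genuine function of its "time" and "random" inputs.
  def report(now: () => Instant, random: Iterator[Int]): String = {
    val stamp  = now()
    val sample = random.take(100).toList
    s"at $stamp, mean = ${sample.sum / sample.size}"
  }

  def main(args: Array[String]): Unit = {
    // In a test: no happy-path data. A hundred fours, ten years from "today".
    val tenYearsOut = () => Instant.parse("2035-05-22T00:00:00Z")
    val fours       = Iterator.continually(4)
    println(report(tenYearsOut, fours))  // at 2035-05-22T00:00:00Z, mean = 4
  }
}
```

Run twice, get the same string twice: the test is reproducible because nothing in it depends on the machine it runs on.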

I'd be glad to see how all this changes if logic is e.g. linear. 

 

juan_gandhi: (Default)
http://math.andrej.com/2008/02/02/the-hydra-game/

My question actually is this: can the Goodstein sequence be proven to terminate in lambda calculus? Naively, it seems it should be possible.

And another question: can set theory be modeled in lambda? (Or should I reread Dana Scott's works?)
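For reference, the sequence itself is easy to compute (a sketch; `bump` rewrites n in hereditary base-b notation and replaces every b by b+1, exponents included — termination is the hard theorem, not the computation):

```scala
object Goodstein {
  // Rewrite n in hereditary base-b notation, replacing b with b+1
  // everywhere, including inside the exponents.
  def bump(n: BigInt, b: Int): BigInt = {
    def go(m: BigInt, exp: Int): BigInt =
      if (m == 0) BigInt(0)
      else (m % b) * BigInt(b + 1).pow(bump(BigInt(exp), b).toInt) + go(m / b, exp + 1)
    go(n, 0)
  }

  // The Goodstein sequence starting at m: bump the base, subtract one, repeat.
  def sequence(m: BigInt): LazyList[BigInt] = {
    def from(n: BigInt, b: Int): LazyList[BigInt] =
      n #:: (if (n == 0) LazyList.empty else from(bump(n, b) - 1, b + 1))
    from(m, 2)
  }

  def main(args: Array[String]): Unit =
    println(sequence(3).toList)  // 3, 3, 3, 2, 1, 0 — it does reach zero
}
```

Starting at 4 instead of 3 already takes about 10^121210694 steps to reach zero, which is the whole point: PA can compute every term but cannot prove the sequence always terminates.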
juan_gandhi: (Default)
Finished my logic. (Well, the tests will still run all night, but that's nothing, a mirage.)
juan_gandhi: (Default)
def checkDistributivity(cat: Category): MatchResult[Any] = {
  val topos = new CategoryOfDiagrams(cat)
  import topos._
  val points = Ω.points

  val desc = s"Testing ${cat.name} distributivity laws"
  println(desc)

  for { pt1 ← points } {
    println(s" at ${pt1.tag}")
    val p = predicateFor(pt1)

    for { pt2 ← points } {
      val q = predicateFor(pt2)
      val pAndQ = p ∧ q
      val pOrQ = p ∨ q

      for { pt3 ← points } {
        val r = predicateFor(pt3)
        // distributivity of conjunction over disjunction
        (p ∧ (q ∨ r)) === (pAndQ ∨ (p ∧ r))
        // distributivity of disjunction over conjunction
        (p ∨ (q ∧ r)) === (pOrQ ∧ (p ∨ r))
      }
    }
  }

  ok
}
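Outside the topos machinery, the same two laws can be brute-forced over a toy Heyting algebra, here the three-element chain 0 < ½ < 1 with min for ∧ and max for ∨ (my stand-in model, not the code above; any chain is distributive):

```scala
object Distributivity {
  val points: List[Double] = List(0.0, 0.5, 1.0)

  def and(p: Double, q: Double): Double = math.min(p, q)
  def or(p: Double, q: Double): Double  = math.max(p, q)

  // Both distributive laws, checked at every triple of truth values.
  val holds: Boolean = (for {
    p <- points; q <- points; r <- points
  } yield and(p, or(q, r)) == or(and(p, q), and(p, r)) &&
          or(p, and(q, r)) == and(or(p, q), or(p, r))).forall(identity)

  def main(args: Array[String]): Unit = println(holds)  // prints "true"
}
```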
juan_gandhi: (Default)

for { p <- Ω.points } {
  val pp = predicateFor(p)
  (True ∧ pp) === pp
  (False ∧ pp) === False

  // idempotence
  (pp ∧ pp) === pp

  for { q <- Ω.points } {
    val pq = predicateFor(q)
    val ppq = pp ∧ pq

    // commutativity
    (pp ∧ pq) === (pq ∧ pp)

    for { r <- Ω.points } {
      val pr = predicateFor(r)
      // associativity
      (ppq ∧ pr) === (pp ∧ (pq ∧ pr))
    }
  }
}
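The same unit, annihilation, idempotence, commutativity, and associativity laws, brute-forced over the powerset of a two-element set with ∩ for ∧ (a toy stand-in for Ω.points, names mine):

```scala
object MeetLaws {
  type U = Set[Int]
  val points: List[U] = List(Set(), Set(1), Set(2), Set(1, 2))
  val True: U  = Set(1, 2)  // top: the whole set
  val False: U = Set()      // bottom: the empty set

  // Check every law at every point, pair, and triple.
  val holds: Boolean = points.forall { p =>
    (True & p) == p && (False & p) == False && (p & p) == p &&
    points.forall { q =>
      (p & q) == (q & p) &&
      points.forall { r => ((p & q) & r) == (p & (q & r)) }
    }
  }

  def main(args: Array[String]): Unit = println(holds)  // prints "true"
}
```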

juan_gandhi: (Default)
Knowledge and wisdom, savoir et connaître, ведать и знать, etc. Not only in Indo-European languages, but in many others (e.g. Yakut).

For decades I tried to figure out how come this dichotomy is so ubiquitous, although not explicitly recognized.

It turned out it's a part of epistemology. Kant wrote about it; so did Popper and Quine. And of course it's a part of Navya-Nyaya logic.

(Just reading about "Einstein-Bohr discussions", that's where I found it.)


 

 

juan_gandhi: (Default)
wiki

In short: the set of all ordinals would itself be an ordinal, hence its own element, hence less than itself; this is the Burali-Forti paradox.
juan_gandhi: (Default)
Pocket Set Theory 

In PST, the well-foundedness of all sets is neither provable nor disprovable.

 
juan_gandhi: (Default)
In the English language, conjunctions come in three basic types: the coordinating conjunctions, the subordinating conjunctions, and the correlative conjunctions. 
juan_gandhi: (Default)
John Baez writes in his tweets (go ahead and look them up, or, better, subscribe to his amazing tweets).

Do you know what the "continuum hypothesis" is? It's about whether there is a set of intermediate size between a countable set (of size ℵ0), for example the natural numbers, and its power set (of size 2^ℵ0, the continuum). It was proven over 50 years ago that neither the existence nor the non-existence follows from the axioms of Zermelo-Fraenkel set theory. So, when mathematicians say that they base their absolutely strict and correct theorems on set theory (I don't believe them), we can always ask: which one?

Now things have become more serious.

Suppose you are a serious "machine learning data scientist", and you want to base your tea-leaves guesses on solid math. That is, figure out the theory behind taking billions of pictures of cats and dogs and detecting cats in them. (A former colleague of mine was focusing on figuring out whether a picture shows a cat or a mouse, and found that if the fur is uniform gray, the "algorithm" says it's a mouse. Do you have a Russian Blue?)

So what we do, while "detecting", is a kind of data compression. It's closer to something like a mapping 2^N -> N.

Now, surprise. The feasibility of this operation, in a general setting, is equivalent to there being a finite number of intermediate sizes between ℵ0 and 2^ℵ0.

Details are here: https://www.nature.com/articles/s42256-018-0002-3

Learnability can be undecidable

"The mathematical foundations of machine learning play a key role in the development of the field. They improve our understanding and provide tools for designing new learning paradigms. The advantages of mathematics, however, sometimes come with a cost. Gödel and Cohen showed, in a nutshell, that not everything is provable. Here we show that machine learning shares this fate. We describe simple scenarios where learnability cannot be proved nor refuted using the standard axioms of mathematics. Our proof is based on the fact the continuum hypothesis cannot be proved nor refuted. We show that, in some cases, a solution to the ‘estimating the maximum’ problem is equivalent to the continuum hypothesis. The main idea is to prove an equivalence between learnability and compression."

Page generated May. 22nd, 2025 06:44 am
Powered by Dreamwidth Studios